Feb 18 00:25:25 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 18 00:25:25 crc restorecon[4751]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 18 00:25:25 crc restorecon[4751]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc 
restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:25:25 crc 
restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 
00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:25:25 crc 
restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 00:25:25 crc 
restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:25
crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 
00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 00:25:25 crc 
restorecon[4751]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc 
restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc 
restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:25 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc 
restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc 
restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc 
restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:25:26 crc 
restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:25:26 crc restorecon[4751]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 00:25:26 crc restorecon[4751]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 18 00:25:26 crc restorecon[4751]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 18 00:25:27 crc kubenswrapper[4847]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 00:25:27 crc kubenswrapper[4847]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 18 00:25:27 crc kubenswrapper[4847]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 00:25:27 crc kubenswrapper[4847]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 18 00:25:27 crc kubenswrapper[4847]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 18 00:25:27 crc kubenswrapper[4847]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.120188 4847 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128401 4847 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128445 4847 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128456 4847 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128466 4847 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128476 4847 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128486 4847 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128495 4847 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128504 4847 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128514 4847 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128525 4847 
feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128535 4847 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128544 4847 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128553 4847 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128563 4847 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128572 4847 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128582 4847 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128593 4847 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128637 4847 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128649 4847 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128660 4847 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128670 4847 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128682 4847 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128695 4847 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128707 4847 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128716 4847 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128725 4847 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128734 4847 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128743 4847 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128767 4847 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128777 4847 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128785 4847 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128794 4847 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128805 4847 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128821 4847 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128841 4847 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128852 4847 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128864 4847 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128875 4847 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128887 4847 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128901 4847 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128913 4847 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128925 4847 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128938 4847 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128951 4847 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128963 4847 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128975 4847 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128985 4847 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.128998 4847 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.129008 4847 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.129017 4847 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.129026 4847 feature_gate.go:330] unrecognized feature gate: Example Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.129034 4847 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.129042 4847 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.129051 4847 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.129059 4847 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.129067 4847 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.129076 4847 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.129085 4847 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.129093 4847 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.129101 4847 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.129110 4847 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.129118 4847 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 
00:25:27.129126 4847 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.129135 4847 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.129144 4847 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.129153 4847 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.129161 4847 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.129170 4847 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.129180 4847 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.129192 4847 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.129202 4847 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130269 4847 flags.go:64] FLAG: --address="0.0.0.0" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130324 4847 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130348 4847 flags.go:64] FLAG: --anonymous-auth="true" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130364 4847 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130381 4847 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130395 4847 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130412 4847 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130431 4847 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130443 4847 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130456 4847 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130471 4847 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130484 4847 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130497 4847 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130508 4847 flags.go:64] FLAG: --cgroup-root="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130520 4847 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 18 
00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130533 4847 flags.go:64] FLAG: --client-ca-file="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130545 4847 flags.go:64] FLAG: --cloud-config="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130558 4847 flags.go:64] FLAG: --cloud-provider="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130569 4847 flags.go:64] FLAG: --cluster-dns="[]" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130587 4847 flags.go:64] FLAG: --cluster-domain="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130641 4847 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130655 4847 flags.go:64] FLAG: --config-dir="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130667 4847 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130681 4847 flags.go:64] FLAG: --container-log-max-files="5" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130698 4847 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130714 4847 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130727 4847 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130740 4847 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130754 4847 flags.go:64] FLAG: --contention-profiling="false" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130766 4847 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130778 4847 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130791 4847 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 18 
00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130803 4847 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130836 4847 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130849 4847 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130860 4847 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130873 4847 flags.go:64] FLAG: --enable-load-reader="false" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130886 4847 flags.go:64] FLAG: --enable-server="true" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130898 4847 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130914 4847 flags.go:64] FLAG: --event-burst="100" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130928 4847 flags.go:64] FLAG: --event-qps="50" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130942 4847 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130955 4847 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130972 4847 flags.go:64] FLAG: --eviction-hard="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.130988 4847 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131001 4847 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131014 4847 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131029 4847 flags.go:64] FLAG: --eviction-soft="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131041 4847 flags.go:64] FLAG: 
--eviction-soft-grace-period="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131054 4847 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131066 4847 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131076 4847 flags.go:64] FLAG: --experimental-mounter-path="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131086 4847 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131096 4847 flags.go:64] FLAG: --fail-swap-on="true" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131105 4847 flags.go:64] FLAG: --feature-gates="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131117 4847 flags.go:64] FLAG: --file-check-frequency="20s" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131127 4847 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131138 4847 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131148 4847 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131158 4847 flags.go:64] FLAG: --healthz-port="10248" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131168 4847 flags.go:64] FLAG: --help="false" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131178 4847 flags.go:64] FLAG: --hostname-override="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131188 4847 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131198 4847 flags.go:64] FLAG: --http-check-frequency="20s" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131208 4847 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131217 4847 flags.go:64] FLAG: 
--image-credential-provider-config="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131226 4847 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131237 4847 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131246 4847 flags.go:64] FLAG: --image-service-endpoint="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131256 4847 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131265 4847 flags.go:64] FLAG: --kube-api-burst="100" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131275 4847 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131285 4847 flags.go:64] FLAG: --kube-api-qps="50" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131295 4847 flags.go:64] FLAG: --kube-reserved="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131347 4847 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131359 4847 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131369 4847 flags.go:64] FLAG: --kubelet-cgroups="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131378 4847 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131389 4847 flags.go:64] FLAG: --lock-file="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131399 4847 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131409 4847 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131419 4847 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131434 4847 flags.go:64] 
FLAG: --log-json-split-stream="false" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131445 4847 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131454 4847 flags.go:64] FLAG: --log-text-split-stream="false" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131464 4847 flags.go:64] FLAG: --logging-format="text" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131473 4847 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131484 4847 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131494 4847 flags.go:64] FLAG: --manifest-url="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131503 4847 flags.go:64] FLAG: --manifest-url-header="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131516 4847 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131526 4847 flags.go:64] FLAG: --max-open-files="1000000" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131538 4847 flags.go:64] FLAG: --max-pods="110" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131548 4847 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131558 4847 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131568 4847 flags.go:64] FLAG: --memory-manager-policy="None" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131577 4847 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131588 4847 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131635 4847 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 
00:25:27.131647 4847 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131670 4847 flags.go:64] FLAG: --node-status-max-images="50" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131680 4847 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131690 4847 flags.go:64] FLAG: --oom-score-adj="-999" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131700 4847 flags.go:64] FLAG: --pod-cidr="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131710 4847 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131725 4847 flags.go:64] FLAG: --pod-manifest-path="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131735 4847 flags.go:64] FLAG: --pod-max-pids="-1" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131746 4847 flags.go:64] FLAG: --pods-per-core="0" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131757 4847 flags.go:64] FLAG: --port="10250" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131767 4847 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131777 4847 flags.go:64] FLAG: --provider-id="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131787 4847 flags.go:64] FLAG: --qos-reserved="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131797 4847 flags.go:64] FLAG: --read-only-port="10255" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131807 4847 flags.go:64] FLAG: --register-node="true" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131817 4847 flags.go:64] FLAG: --register-schedulable="true" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 
00:25:27.131827 4847 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131843 4847 flags.go:64] FLAG: --registry-burst="10" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131853 4847 flags.go:64] FLAG: --registry-qps="5" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131862 4847 flags.go:64] FLAG: --reserved-cpus="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131872 4847 flags.go:64] FLAG: --reserved-memory="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131884 4847 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131894 4847 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131904 4847 flags.go:64] FLAG: --rotate-certificates="false" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131914 4847 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131923 4847 flags.go:64] FLAG: --runonce="false" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131933 4847 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131943 4847 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131954 4847 flags.go:64] FLAG: --seccomp-default="false" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131963 4847 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131974 4847 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131984 4847 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.131994 4847 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 18 00:25:27 crc 
kubenswrapper[4847]: I0218 00:25:27.132003 4847 flags.go:64] FLAG: --storage-driver-password="root" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.132013 4847 flags.go:64] FLAG: --storage-driver-secure="false" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.132022 4847 flags.go:64] FLAG: --storage-driver-table="stats" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.132034 4847 flags.go:64] FLAG: --storage-driver-user="root" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.132044 4847 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.132054 4847 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.132064 4847 flags.go:64] FLAG: --system-cgroups="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.132075 4847 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.132091 4847 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.132100 4847 flags.go:64] FLAG: --tls-cert-file="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.132110 4847 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.132125 4847 flags.go:64] FLAG: --tls-min-version="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.132134 4847 flags.go:64] FLAG: --tls-private-key-file="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.132144 4847 flags.go:64] FLAG: --topology-manager-policy="none" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.132154 4847 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.132163 4847 flags.go:64] FLAG: --topology-manager-scope="container" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.132173 4847 flags.go:64] FLAG: --v="2" Feb 18 00:25:27 crc 
kubenswrapper[4847]: I0218 00:25:27.132186 4847 flags.go:64] FLAG: --version="false" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.132198 4847 flags.go:64] FLAG: --vmodule="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.132212 4847 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.132222 4847 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132457 4847 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132471 4847 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132483 4847 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132496 4847 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132508 4847 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132519 4847 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132529 4847 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132539 4847 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132549 4847 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132558 4847 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132567 4847 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 18 00:25:27 crc 
kubenswrapper[4847]: W0218 00:25:27.132576 4847 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132584 4847 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132593 4847 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132631 4847 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132641 4847 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132649 4847 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132658 4847 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132668 4847 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132676 4847 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132684 4847 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132693 4847 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132702 4847 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132710 4847 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132719 4847 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132727 4847 feature_gate.go:330] unrecognized feature gate: 
RouteAdvertisements Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132735 4847 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132743 4847 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132751 4847 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132760 4847 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132768 4847 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132777 4847 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132870 4847 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132879 4847 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132892 4847 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132901 4847 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132910 4847 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132922 4847 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132933 4847 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132943 4847 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132953 4847 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132963 4847 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132971 4847 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132985 4847 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.132999 4847 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133012 4847 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133027 4847 feature_gate.go:330] unrecognized feature gate: Example Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133039 4847 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133049 4847 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133059 4847 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133069 4847 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133078 4847 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133087 4847 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133095 4847 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133103 4847 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133112 4847 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133120 4847 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133129 4847 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133138 4847 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133146 4847 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133154 4847 
feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133163 4847 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133171 4847 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133179 4847 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133188 4847 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133196 4847 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133205 4847 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133213 4847 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133224 4847 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133235 4847 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.133248 4847 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.133298 4847 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.144662 4847 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.144705 4847 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.144843 4847 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.144868 4847 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.144879 4847 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.144890 4847 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.144899 4847 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.144907 4847 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.144916 4847 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.144924 4847 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.144933 4847 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.144943 4847 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.144952 4847 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.144960 4847 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.144968 4847 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.144977 4847 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.144986 4847 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.144994 4847 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145005 4847 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145015 4847 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145027 4847 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145036 4847 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145045 4847 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145053 4847 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145063 4847 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145071 4847 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145080 4847 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145089 4847 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145098 4847 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145108 4847 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145120 4847 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145133 4847 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145144 4847 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145154 4847 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145164 4847 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145173 4847 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145184 4847 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145193 4847 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145203 4847 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145214 4847 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145224 4847 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145235 4847 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145245 4847 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145255 4847 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145264 4847 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145272 4847 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145280 4847 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145289 4847 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145297 4847 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145306 4847 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145314 4847 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145322 4847 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145331 4847 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145340 4847 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145349 4847 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145357 4847 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145365 4847 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145374 4847 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145385 4847 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145396 4847 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145405 4847 feature_gate.go:330] unrecognized feature gate: Example
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145414 4847 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145425 4847 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145433 4847 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145442 4847 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145450 4847 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145458 4847 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145467 4847 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145475 4847 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145483 4847 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145492 4847 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145501 4847 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145510 4847 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.145525 4847 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145828 4847 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145844 4847 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145854 4847 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145863 4847 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145873 4847 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145883 4847 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145893 4847 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145902 4847 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145911 4847 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145920 4847 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145929 4847 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145939 4847 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145948 4847 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145957 4847 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145965 4847 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145976 4847 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145988 4847 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.145996 4847 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146008 4847 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146020 4847 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146030 4847 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146040 4847 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146049 4847 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146058 4847 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146067 4847 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146076 4847 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146084 4847 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146092 4847 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146101 4847 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146111 4847 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146119 4847 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146127 4847 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146136 4847 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146144 4847 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146153 4847 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146162 4847 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146170 4847 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146178 4847 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146187 4847 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146196 4847 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146207 4847 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146218 4847 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146228 4847 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146237 4847 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146246 4847 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146255 4847 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146264 4847 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146273 4847 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146282 4847 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146292 4847 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146301 4847 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146310 4847 feature_gate.go:330] unrecognized feature gate: Example
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146319 4847 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146327 4847 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146336 4847 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146344 4847 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146353 4847 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146361 4847 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146372 4847 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146383 4847 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146392 4847 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146402 4847 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146411 4847 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146419 4847 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146428 4847 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146436 4847 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146445 4847 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146453 4847 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146462 4847 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146471 4847 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.146481 4847 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.146494 4847 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.148335 4847 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.166340 4847 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.166480 4847 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.168242 4847 server.go:997] "Starting client certificate rotation"
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.168292 4847 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.169291 4847 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-06 04:21:14.219555497 +0000 UTC
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.169374 4847 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.194415 4847 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 18 00:25:27 crc kubenswrapper[4847]: E0218 00:25:27.197415 4847 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError"
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.199971 4847 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.221016 4847 log.go:25] "Validated CRI v1 runtime API"
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.267552 4847 log.go:25] "Validated CRI v1 image API"
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.269682 4847 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.276271 4847 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-18-00-20-59-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.276330 4847 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.304747 4847 manager.go:217] Machine: {Timestamp:2026-02-18 00:25:27.301981412 +0000 UTC m=+0.679332414 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:203b95f6-5cb7-4117-864d-f1073ddd6998 BootID:11f7a530-3cae-485b-860e-571ec4f730a1 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:d4:13:30 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:d4:13:30 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:4b:57:89 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:aa:0c:09 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:a0:fb:7a Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e2:b6:0d Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:7c:1a:74 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:26:c1:6d:59:6b:fa Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:e2:e2:a8:14:c6:80 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.305171 4847 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.305354 4847 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.307641 4847 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.307958 4847 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.308017 4847 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.309332 4847 topology_manager.go:138] "Creating topology manager with none policy"
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.309362 4847 container_manager_linux.go:303] "Creating device plugin manager"
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.309869 4847 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.309914 4847 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.310157 4847 state_mem.go:36] "Initialized new in-memory state store"
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.310319 4847 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.316721 4847 kubelet.go:418] "Attempting to sync node with API server"
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.316763 4847 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.316789 4847 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.316810 4847 kubelet.go:324] "Adding apiserver pod source"
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.316829 4847 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.321147 4847 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.321715 4847 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused
Feb 18 00:25:27 crc kubenswrapper[4847]: E0218 00:25:27.321792 4847 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError"
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.322467 4847 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.322590 4847 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 00:25:27 crc kubenswrapper[4847]: E0218 00:25:27.322720 4847 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.324986 4847 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.326542 4847 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.326571 4847 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.326584 4847 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.326596 4847 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.326659 4847 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.326674 4847 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.326686 4847 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.326704 4847 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.326757 4847 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.326772 4847 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.326806 4847 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.326819 4847 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.327678 4847 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.328366 4847 server.go:1280] "Started kubelet" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.329333 4847 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 18 00:25:27 crc systemd[1]: Started Kubernetes Kubelet. 
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.330397 4847 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.334506 4847 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.335084 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.335170 4847 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.336044 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 18:57:50.875499729 +0000 UTC Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.336314 4847 server.go:460] "Adding debug handlers to kubelet server" Feb 18 00:25:27 crc kubenswrapper[4847]: E0218 00:25:27.337019 4847 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.337857 4847 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.337986 4847 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.338550 4847 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 18 00:25:27 crc kubenswrapper[4847]: E0218 00:25:27.339108 4847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="200ms" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.339457 4847 
factory.go:55] Registering systemd factory Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.339491 4847 factory.go:221] Registration of the systemd container factory successfully Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.339911 4847 factory.go:153] Registering CRI-O factory Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.339943 4847 factory.go:221] Registration of the crio container factory successfully Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.340017 4847 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.340039 4847 factory.go:103] Registering Raw factory Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.340055 4847 manager.go:1196] Started watching for new ooms in manager Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.341941 4847 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.343002 4847 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.343104 4847 manager.go:319] Starting recovery of all containers Feb 18 00:25:27 crc kubenswrapper[4847]: E0218 00:25:27.343186 4847 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 00:25:27 crc kubenswrapper[4847]: E0218 00:25:27.349778 4847 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18952f90c8a6e4c3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:25:27.328335043 +0000 UTC m=+0.705685995,LastTimestamp:2026-02-18 00:25:27.328335043 +0000 UTC m=+0.705685995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.356258 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.356496 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.356653 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" 
seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.356769 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.356890 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.357004 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.357110 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.357270 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.357397 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.357513 
4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.357655 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.357771 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.357879 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.358015 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.358105 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.358218 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.358320 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.358415 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.358538 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.358669 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.358779 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.358874 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.358977 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.359080 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.359241 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.359365 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.359488 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.359592 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" 
seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.359752 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.359882 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.359997 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.360108 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.360257 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.360384 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 
00:25:27.360554 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.360709 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.360864 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.360963 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.361058 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.361144 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.361223 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.361329 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.361440 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.361555 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.361697 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.361822 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.361930 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.362044 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.362155 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.362261 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.362382 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.362485 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.362632 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.362760 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.362859 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.362954 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.363057 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.363192 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.365317 4847 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.365467 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.365621 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.365761 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.365878 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.366022 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.366147 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.366264 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.366382 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.366522 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.366550 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.366573 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.366596 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.366766 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.366779 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.366795 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.366808 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.366823 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.366837 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.366852 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.366873 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.366885 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.366899 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367099 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367146 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367170 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367188 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367213 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367264 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367276 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367292 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367305 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367319 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367331 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367344 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367356 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367386 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367398 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367410 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367423 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367440 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367451 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367463 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367474 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367562 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367577 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367597 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367702 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367726 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367747 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367762 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367775 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367815 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367830 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367842 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367880 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367897 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367912 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367928 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.367943 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368053 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368066 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368077 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368088 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368098 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368110 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368121 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368134 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368234 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368249 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368312 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368327 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368372 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368397 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368413 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368433 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368466 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368477 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368493 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368504 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368515 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368526 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368645 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368669 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368723 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368745 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368765 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368859 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368871 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368904 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368916 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368929 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368974 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.368986 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369060 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369072 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369082 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369091 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369143 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369155 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369189 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369201 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369279 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369298 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369333 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369349 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369362 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369376 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369430 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369441 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369562 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369575 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369595 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369647 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369658 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369667 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369724 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369734 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369744 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369774 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369784 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369794 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369809 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369820 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369874 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369889 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369899 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369914 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369924 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369984 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f"
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.369995 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.370011 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.370031 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.370040 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.370050 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.370061 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.370072 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.370084 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.370095 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.370105 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.370116 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.370126 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" 
Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.370141 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.370156 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.370173 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.370192 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.370204 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.370215 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 
00:25:27.370226 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.370243 4847 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.370255 4847 reconstruct.go:97] "Volume reconstruction finished" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.370266 4847 reconciler.go:26] "Reconciler: start to sync state" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.370283 4847 manager.go:324] Recovery completed Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.395722 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.398356 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.398401 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.398414 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.399542 4847 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.401431 4847 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.401505 4847 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.401532 4847 state_mem.go:36] "Initialized new in-memory state store" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.402763 4847 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.402864 4847 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.402934 4847 kubelet.go:2335] "Starting kubelet main sync loop" Feb 18 00:25:27 crc kubenswrapper[4847]: E0218 00:25:27.403056 4847 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 18 00:25:27 crc kubenswrapper[4847]: W0218 00:25:27.404895 4847 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 00:25:27 crc kubenswrapper[4847]: E0218 00:25:27.404974 4847 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.428195 4847 policy_none.go:49] "None policy: Start" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.429683 4847 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 18 
00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.429749 4847 state_mem.go:35] "Initializing new in-memory state store" Feb 18 00:25:27 crc kubenswrapper[4847]: E0218 00:25:27.437948 4847 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:25:27 crc kubenswrapper[4847]: E0218 00:25:27.503908 4847 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.505851 4847 manager.go:334] "Starting Device Plugin manager" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.506028 4847 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.506052 4847 server.go:79] "Starting device plugin registration server" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.507073 4847 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.507107 4847 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.507811 4847 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.508026 4847 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.508050 4847 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 18 00:25:27 crc kubenswrapper[4847]: E0218 00:25:27.515919 4847 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 00:25:27 crc kubenswrapper[4847]: E0218 00:25:27.540640 4847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="400ms" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.607816 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.609982 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.610029 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.610040 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.610078 4847 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 00:25:27 crc kubenswrapper[4847]: E0218 00:25:27.610752 4847 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.704408 4847 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.704573 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.706091 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 
00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.706139 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.706148 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.706285 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.706678 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.706748 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.707202 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.707234 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.707243 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.707338 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.707675 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.707768 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.708931 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.708960 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.708969 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.709004 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.709038 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.709054 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.709113 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.709530 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.709562 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.709573 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.709597 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.709578 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.709959 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.709984 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.709996 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.710152 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.710295 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.710332 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.710421 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.710466 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.710484 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.710970 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.711008 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.711025 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.711060 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.711085 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.711094 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.711311 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.711358 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.712427 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.712448 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.712457 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.779069 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.779141 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.779171 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 
00:25:27.779190 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.779211 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.779231 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.779299 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.779401 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.779466 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.779510 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.779677 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.779775 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.779828 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.779879 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.779936 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.811070 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.812940 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.812984 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.812999 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.813029 4847 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 00:25:27 crc kubenswrapper[4847]: E0218 00:25:27.813624 4847 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.882129 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.882242 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.882365 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.882373 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.882433 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.882507 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.882641 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.882778 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.882827 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.882907 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.882933 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.883005 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.883082 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.883183 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.883249 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.883338 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.883357 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.883446 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.883462 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.883517 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.883497 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.883471 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.883730 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.883795 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.883938 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.883848 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.884051 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.884136 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.884226 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: I0218 00:25:27.884385 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:25:27 crc kubenswrapper[4847]: E0218 00:25:27.942668 4847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="800ms" Feb 18 00:25:28 crc kubenswrapper[4847]: I0218 00:25:28.055213 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:25:28 crc kubenswrapper[4847]: I0218 00:25:28.072256 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:25:28 crc kubenswrapper[4847]: I0218 00:25:28.106182 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:25:28 crc kubenswrapper[4847]: W0218 00:25:28.114209 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-7f7252f73e094b8a1fce4d3468de2df8e43125b3d51ef3aa0985a68aa9925a38 WatchSource:0}: Error finding container 7f7252f73e094b8a1fce4d3468de2df8e43125b3d51ef3aa0985a68aa9925a38: Status 404 returned error can't find the container with id 7f7252f73e094b8a1fce4d3468de2df8e43125b3d51ef3aa0985a68aa9925a38 Feb 18 00:25:28 crc kubenswrapper[4847]: I0218 00:25:28.115418 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 00:25:28 crc kubenswrapper[4847]: I0218 00:25:28.119701 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 18 00:25:28 crc kubenswrapper[4847]: W0218 00:25:28.122076 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-6789610d34c0e845cf22c6ec2ea8e6445240d0b024fada46e47a166f93ee8c7a WatchSource:0}: Error finding container 6789610d34c0e845cf22c6ec2ea8e6445240d0b024fada46e47a166f93ee8c7a: Status 404 returned error can't find the container with id 6789610d34c0e845cf22c6ec2ea8e6445240d0b024fada46e47a166f93ee8c7a Feb 18 00:25:28 crc kubenswrapper[4847]: W0218 00:25:28.139345 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-fdf477934253f3e59b27d163a202032ccfb5919820869d97e2b920b07af75d16 WatchSource:0}: Error finding container fdf477934253f3e59b27d163a202032ccfb5919820869d97e2b920b07af75d16: Status 404 returned error can't find the container with id 
fdf477934253f3e59b27d163a202032ccfb5919820869d97e2b920b07af75d16 Feb 18 00:25:28 crc kubenswrapper[4847]: W0218 00:25:28.143325 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-4e10424dcabf2ee0b7e51bd547ee827ab6dfd32bbaa70629bfe020015085eacc WatchSource:0}: Error finding container 4e10424dcabf2ee0b7e51bd547ee827ab6dfd32bbaa70629bfe020015085eacc: Status 404 returned error can't find the container with id 4e10424dcabf2ee0b7e51bd547ee827ab6dfd32bbaa70629bfe020015085eacc Feb 18 00:25:28 crc kubenswrapper[4847]: W0218 00:25:28.152688 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-2b08e85f04330855d772e2da03f7158860d3d43f89805097bbb88f7f7a33a3ca WatchSource:0}: Error finding container 2b08e85f04330855d772e2da03f7158860d3d43f89805097bbb88f7f7a33a3ca: Status 404 returned error can't find the container with id 2b08e85f04330855d772e2da03f7158860d3d43f89805097bbb88f7f7a33a3ca Feb 18 00:25:28 crc kubenswrapper[4847]: W0218 00:25:28.202280 4847 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 00:25:28 crc kubenswrapper[4847]: E0218 00:25:28.202407 4847 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 00:25:28 crc kubenswrapper[4847]: I0218 00:25:28.214085 4847 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 18 00:25:28 crc kubenswrapper[4847]: I0218 00:25:28.216639 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:28 crc kubenswrapper[4847]: I0218 00:25:28.216718 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:28 crc kubenswrapper[4847]: I0218 00:25:28.216741 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:28 crc kubenswrapper[4847]: I0218 00:25:28.216833 4847 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 00:25:28 crc kubenswrapper[4847]: E0218 00:25:28.217352 4847 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Feb 18 00:25:28 crc kubenswrapper[4847]: I0218 00:25:28.336834 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:22:45.66728522 +0000 UTC Feb 18 00:25:28 crc kubenswrapper[4847]: I0218 00:25:28.342787 4847 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 00:25:28 crc kubenswrapper[4847]: W0218 00:25:28.380660 4847 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 00:25:28 crc kubenswrapper[4847]: E0218 00:25:28.380723 4847 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed 
to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 00:25:28 crc kubenswrapper[4847]: I0218 00:25:28.411518 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2b08e85f04330855d772e2da03f7158860d3d43f89805097bbb88f7f7a33a3ca"} Feb 18 00:25:28 crc kubenswrapper[4847]: I0218 00:25:28.412929 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4e10424dcabf2ee0b7e51bd547ee827ab6dfd32bbaa70629bfe020015085eacc"} Feb 18 00:25:28 crc kubenswrapper[4847]: I0218 00:25:28.413959 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fdf477934253f3e59b27d163a202032ccfb5919820869d97e2b920b07af75d16"} Feb 18 00:25:28 crc kubenswrapper[4847]: I0218 00:25:28.417215 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6789610d34c0e845cf22c6ec2ea8e6445240d0b024fada46e47a166f93ee8c7a"} Feb 18 00:25:28 crc kubenswrapper[4847]: I0218 00:25:28.418416 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7f7252f73e094b8a1fce4d3468de2df8e43125b3d51ef3aa0985a68aa9925a38"} Feb 18 00:25:28 crc kubenswrapper[4847]: W0218 00:25:28.447753 4847 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 00:25:28 crc kubenswrapper[4847]: E0218 00:25:28.447864 4847 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 00:25:28 crc kubenswrapper[4847]: E0218 00:25:28.744093 4847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="1.6s" Feb 18 00:25:28 crc kubenswrapper[4847]: W0218 00:25:28.912213 4847 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 00:25:28 crc kubenswrapper[4847]: E0218 00:25:28.912295 4847 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.018156 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.019477 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:29 crc 
kubenswrapper[4847]: I0218 00:25:29.019552 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.019580 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.019654 4847 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 00:25:29 crc kubenswrapper[4847]: E0218 00:25:29.020314 4847 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.337854 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 11:51:06.202402174 +0000 UTC Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.342845 4847 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.348853 4847 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 00:25:29 crc kubenswrapper[4847]: E0218 00:25:29.350168 4847 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.422976 4847 generic.go:334] "Generic (PLEG): 
container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164" exitCode=0 Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.423086 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164"} Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.423231 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.424723 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.424755 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.424774 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.426031 4847 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="6c6747f87a275220d0b0a5e243d5dea9341223b8c05e02c2d037a1958f903a33" exitCode=0 Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.426105 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"6c6747f87a275220d0b0a5e243d5dea9341223b8c05e02c2d037a1958f903a33"} Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.426200 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.427225 4847 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.427256 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.427269 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.429584 4847 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="95dfa3ef5dc69520156103624de4469e078651072d23753e7f9ae1f3ec145236" exitCode=0 Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.429730 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.429721 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"95dfa3ef5dc69520156103624de4469e078651072d23753e7f9ae1f3ec145236"} Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.430676 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.430719 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.430734 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.431839 4847 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1" exitCode=0 Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.431891 4847 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1"} Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.431968 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.432718 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.432734 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.432743 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.435621 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340"} Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.435648 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89"} Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.435658 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0"} Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.435667 4847 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa"} Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.435732 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.436273 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.436296 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.436303 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.437149 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.437846 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.437862 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.437873 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:29 crc kubenswrapper[4847]: I0218 00:25:29.785227 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:25:29 crc kubenswrapper[4847]: W0218 00:25:29.867541 4847 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 00:25:29 crc kubenswrapper[4847]: E0218 00:25:29.867650 4847 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.047806 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.338623 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 03:20:46.633932011 +0000 UTC Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.344455 4847 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 00:25:30 crc kubenswrapper[4847]: E0218 00:25:30.344532 4847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="3.2s" Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.442193 4847 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6" exitCode=0 Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.442269 4847 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6"} Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.442425 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.443746 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.443819 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.443831 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.448990 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"308b3a78b840de16fff8a1c7ae5a9255a966eca81a3a0cb9e36a6899819fab9c"} Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.449149 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.450302 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.450326 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.450337 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.452869 4847 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cc5f92485d5aa0367c4e57cc6d0e1290f2fc5895346260d7a3c809f1c2dcf311"} Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.452898 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"26b1f245e290d81692c7b3ed3f65742fef2a03f29079ca4f8c108879a4c97b86"} Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.452910 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c7093efe7a91f141a0eb9226115d13254da687dd479d70d9fd0736ab942f377d"} Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.452980 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.453619 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.453636 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.453645 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.456265 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.456755 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9"} Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.456784 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279"} Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.456805 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29"} Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.456819 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52"} Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.457111 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.457148 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.457159 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:30 crc kubenswrapper[4847]: W0218 00:25:30.557294 4847 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Feb 18 00:25:30 crc kubenswrapper[4847]: E0218 
00:25:30.557373 4847 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.621055 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.622048 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.622089 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.622098 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:30 crc kubenswrapper[4847]: I0218 00:25:30.622118 4847 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 00:25:30 crc kubenswrapper[4847]: E0218 00:25:30.625949 4847 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.339616 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 06:45:48.652276808 +0000 UTC Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.461813 4847 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9" exitCode=0 Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.461920 
4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.461939 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9"} Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.462966 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.463024 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.463042 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.468095 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.469160 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.469909 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832"} Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.469960 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.470054 4847 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.470095 4847 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.470832 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.475564 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.475619 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.475628 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.475880 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.475912 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.475922 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.478871 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.478891 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.478902 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.478933 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 
00:25:31.478978 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:31 crc kubenswrapper[4847]: I0218 00:25:31.479004 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:32 crc kubenswrapper[4847]: I0218 00:25:32.339918 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 07:02:38.994628654 +0000 UTC Feb 18 00:25:32 crc kubenswrapper[4847]: I0218 00:25:32.477652 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300"} Feb 18 00:25:32 crc kubenswrapper[4847]: I0218 00:25:32.477715 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:32 crc kubenswrapper[4847]: I0218 00:25:32.477739 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579"} Feb 18 00:25:32 crc kubenswrapper[4847]: I0218 00:25:32.477763 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103"} Feb 18 00:25:32 crc kubenswrapper[4847]: I0218 00:25:32.477827 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:32 crc kubenswrapper[4847]: I0218 00:25:32.477912 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:25:32 crc kubenswrapper[4847]: I0218 
00:25:32.478943 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:32 crc kubenswrapper[4847]: I0218 00:25:32.479035 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:32 crc kubenswrapper[4847]: I0218 00:25:32.479062 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:32 crc kubenswrapper[4847]: I0218 00:25:32.479063 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:32 crc kubenswrapper[4847]: I0218 00:25:32.479096 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:32 crc kubenswrapper[4847]: I0218 00:25:32.479109 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.340059 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 13:00:29.419727887 +0000 UTC Feb 18 00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.485147 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff"} Feb 18 00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.485224 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a"} Feb 18 00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.485258 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 
00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.485258 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.486765 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.486817 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.486833 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.487010 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.487080 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.487108 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.613587 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.671882 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.707649 4847 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 18 00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.826320 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.834282 4847 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.834354 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.834373 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.834415 4847 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.935307 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.935748 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.937799 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.937859 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:33 crc kubenswrapper[4847]: I0218 00:25:33.937920 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:34 crc kubenswrapper[4847]: I0218 00:25:34.341172 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 10:10:34.029551635 +0000 UTC Feb 18 00:25:34 crc kubenswrapper[4847]: I0218 00:25:34.488253 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:34 crc kubenswrapper[4847]: I0218 00:25:34.488273 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Feb 18 00:25:34 crc kubenswrapper[4847]: I0218 00:25:34.489577 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:34 crc kubenswrapper[4847]: I0218 00:25:34.489631 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:34 crc kubenswrapper[4847]: I0218 00:25:34.489655 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:34 crc kubenswrapper[4847]: I0218 00:25:34.490001 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:34 crc kubenswrapper[4847]: I0218 00:25:34.490037 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:34 crc kubenswrapper[4847]: I0218 00:25:34.490047 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:35 crc kubenswrapper[4847]: I0218 00:25:35.204849 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:25:35 crc kubenswrapper[4847]: I0218 00:25:35.205042 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:35 crc kubenswrapper[4847]: I0218 00:25:35.206019 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:35 crc kubenswrapper[4847]: I0218 00:25:35.206044 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:35 crc kubenswrapper[4847]: I0218 00:25:35.206052 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:35 crc kubenswrapper[4847]: I0218 00:25:35.341333 4847 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 18:28:47.114519509 +0000 UTC Feb 18 00:25:35 crc kubenswrapper[4847]: I0218 00:25:35.490195 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:35 crc kubenswrapper[4847]: I0218 00:25:35.491166 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:35 crc kubenswrapper[4847]: I0218 00:25:35.491203 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:35 crc kubenswrapper[4847]: I0218 00:25:35.491215 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:36 crc kubenswrapper[4847]: I0218 00:25:36.341480 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 20:56:10.325888152 +0000 UTC Feb 18 00:25:37 crc kubenswrapper[4847]: I0218 00:25:37.047903 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 18 00:25:37 crc kubenswrapper[4847]: I0218 00:25:37.048135 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:37 crc kubenswrapper[4847]: I0218 00:25:37.049565 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:37 crc kubenswrapper[4847]: I0218 00:25:37.049616 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:37 crc kubenswrapper[4847]: I0218 00:25:37.049631 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:37 crc kubenswrapper[4847]: I0218 
00:25:37.341720 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 18:31:43.880891239 +0000 UTC Feb 18 00:25:37 crc kubenswrapper[4847]: E0218 00:25:37.516212 4847 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 00:25:38 crc kubenswrapper[4847]: I0218 00:25:38.341899 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 21:35:14.951864117 +0000 UTC Feb 18 00:25:38 crc kubenswrapper[4847]: I0218 00:25:38.485274 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 18 00:25:38 crc kubenswrapper[4847]: I0218 00:25:38.485483 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:38 crc kubenswrapper[4847]: I0218 00:25:38.486630 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:38 crc kubenswrapper[4847]: I0218 00:25:38.486662 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:38 crc kubenswrapper[4847]: I0218 00:25:38.486672 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:39 crc kubenswrapper[4847]: I0218 00:25:39.224244 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:25:39 crc kubenswrapper[4847]: I0218 00:25:39.224486 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:39 crc kubenswrapper[4847]: I0218 00:25:39.225962 4847 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:39 crc kubenswrapper[4847]: I0218 00:25:39.225988 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:39 crc kubenswrapper[4847]: I0218 00:25:39.225999 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:39 crc kubenswrapper[4847]: I0218 00:25:39.228259 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:25:39 crc kubenswrapper[4847]: I0218 00:25:39.342623 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 22:58:41.805404255 +0000 UTC Feb 18 00:25:39 crc kubenswrapper[4847]: I0218 00:25:39.499960 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:39 crc kubenswrapper[4847]: I0218 00:25:39.501442 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:39 crc kubenswrapper[4847]: I0218 00:25:39.501522 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:39 crc kubenswrapper[4847]: I0218 00:25:39.501548 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:40 crc kubenswrapper[4847]: I0218 00:25:40.239512 4847 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 18 00:25:40 crc kubenswrapper[4847]: I0218 00:25:40.239587 4847 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 18 00:25:40 crc kubenswrapper[4847]: I0218 00:25:40.343566 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 18:03:20.255765251 +0000 UTC Feb 18 00:25:41 crc kubenswrapper[4847]: W0218 00:25:41.014228 4847 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 18 00:25:41 crc kubenswrapper[4847]: I0218 00:25:41.014776 4847 trace.go:236] Trace[1290153168]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 00:25:31.011) (total time: 10002ms): Feb 18 00:25:41 crc kubenswrapper[4847]: Trace[1290153168]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (00:25:41.014) Feb 18 00:25:41 crc kubenswrapper[4847]: Trace[1290153168]: [10.002799381s] [10.002799381s] END Feb 18 00:25:41 crc kubenswrapper[4847]: E0218 00:25:41.015054 4847 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 18 00:25:41 crc kubenswrapper[4847]: I0218 00:25:41.223657 4847 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed 
with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 18 00:25:41 crc kubenswrapper[4847]: I0218 00:25:41.223743 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 18 00:25:41 crc kubenswrapper[4847]: I0218 00:25:41.227913 4847 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Feb 18 00:25:41 crc kubenswrapper[4847]: I0218 00:25:41.228032 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 18 00:25:41 crc kubenswrapper[4847]: I0218 00:25:41.344127 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 03:20:55.919339682 +0000 UTC Feb 18 00:25:42 crc kubenswrapper[4847]: I0218 00:25:42.225525 4847 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 00:25:42 crc kubenswrapper[4847]: I0218 00:25:42.225712 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 00:25:42 crc kubenswrapper[4847]: I0218 00:25:42.344992 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:19:35.910898252 +0000 UTC Feb 18 00:25:43 crc kubenswrapper[4847]: I0218 00:25:43.345272 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 22:20:17.658402261 +0000 UTC Feb 18 00:25:43 crc kubenswrapper[4847]: I0218 00:25:43.621165 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:25:43 crc kubenswrapper[4847]: I0218 00:25:43.621801 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:43 crc kubenswrapper[4847]: I0218 00:25:43.623504 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:43 crc kubenswrapper[4847]: I0218 00:25:43.623565 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:43 crc kubenswrapper[4847]: I0218 00:25:43.623588 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:43 crc 
kubenswrapper[4847]: I0218 00:25:43.626578 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:25:44 crc kubenswrapper[4847]: I0218 00:25:44.346308 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 17:56:35.756174984 +0000 UTC Feb 18 00:25:44 crc kubenswrapper[4847]: I0218 00:25:44.511135 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:44 crc kubenswrapper[4847]: I0218 00:25:44.512392 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:44 crc kubenswrapper[4847]: I0218 00:25:44.512465 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:44 crc kubenswrapper[4847]: I0218 00:25:44.512493 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:44 crc kubenswrapper[4847]: I0218 00:25:44.770211 4847 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 00:25:45 crc kubenswrapper[4847]: I0218 00:25:45.347340 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 00:11:35.187020447 +0000 UTC Feb 18 00:25:46 crc kubenswrapper[4847]: E0218 00:25:46.213861 4847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.215871 4847 trace.go:236] Trace[1738663102]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 
00:25:31.248) (total time: 14966ms): Feb 18 00:25:46 crc kubenswrapper[4847]: Trace[1738663102]: ---"Objects listed" error: 14966ms (00:25:46.215) Feb 18 00:25:46 crc kubenswrapper[4847]: Trace[1738663102]: [14.966986451s] [14.966986451s] END Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.216369 4847 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 00:25:46 crc kubenswrapper[4847]: E0218 00:25:46.218915 4847 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.222728 4847 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.223306 4847 trace.go:236] Trace[617501361]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 00:25:33.940) (total time: 12283ms): Feb 18 00:25:46 crc kubenswrapper[4847]: Trace[617501361]: ---"Objects listed" error: 12283ms (00:25:46.223) Feb 18 00:25:46 crc kubenswrapper[4847]: Trace[617501361]: [12.283061482s] [12.283061482s] END Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.223343 4847 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.223925 4847 trace.go:236] Trace[214277514]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 00:25:35.590) (total time: 10633ms): Feb 18 00:25:46 crc kubenswrapper[4847]: Trace[214277514]: ---"Objects listed" error: 10632ms (00:25:46.223) Feb 18 00:25:46 crc kubenswrapper[4847]: Trace[214277514]: [10.633091156s] [10.633091156s] END Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.224041 4847 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 00:25:46 
crc kubenswrapper[4847]: I0218 00:25:46.233546 4847 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.251967 4847 csr.go:261] certificate signing request csr-9kg5m is approved, waiting to be issued Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.261134 4847 csr.go:257] certificate signing request csr-9kg5m is issued Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.262879 4847 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.262925 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.263414 4847 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.263462 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.328694 4847 apiserver.go:52] "Watching apiserver" Feb 18 00:25:46 crc 
kubenswrapper[4847]: I0218 00:25:46.331250 4847 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.331530 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.331939 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.332009 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:25:46 crc kubenswrapper[4847]: E0218 00:25:46.332073 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.332120 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.332142 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.332269 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 00:25:46 crc kubenswrapper[4847]: E0218 00:25:46.332343 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.332623 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:25:46 crc kubenswrapper[4847]: E0218 00:25:46.332734 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.334639 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.334870 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.335018 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.335215 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.335483 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.335731 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.335987 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.337964 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.339136 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.339527 4847 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 
00:25:46.348167 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 05:29:55.530886318 +0000 UTC Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.360828 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.370766 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.408219 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425075 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425124 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425154 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425175 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425197 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425216 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425235 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425254 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425277 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425296 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425337 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425358 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425377 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425399 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425422 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425445 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425465 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425484 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425504 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 
18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425536 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425580 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425618 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425638 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425658 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425681 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425703 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425723 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425744 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425763 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425785 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425805 4847 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425823 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425848 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425872 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425896 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425917 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425939 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425958 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425978 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425996 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.426014 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.426035 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.426054 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.426085 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.426107 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.426127 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.426151 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.426170 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.426209 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.426229 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.426251 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.426272 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.426293 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.426314 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.426335 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.426353 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.426374 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.426394 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.426413 4847 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.426431 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.426451 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.429770 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.429823 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.429852 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.429872 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.429899 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.429921 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.429978 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430069 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 00:25:46 crc 
kubenswrapper[4847]: I0218 00:25:46.430120 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430146 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430202 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430230 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430254 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430274 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430296 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430320 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430409 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430448 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430474 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: 
\"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430498 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430519 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430541 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430563 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430580 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430636 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430661 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430697 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430719 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430744 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430767 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430808 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430831 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430940 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430960 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.430984 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431009 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431027 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431076 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431108 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431129 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431157 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 00:25:46 crc kubenswrapper[4847]: 
I0218 00:25:46.431189 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431202 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431290 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431330 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431376 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 
00:25:46.431438 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431465 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431488 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431625 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431651 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431673 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431693 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431712 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431752 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431785 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431806 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431848 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431868 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431909 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431938 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431958 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431993 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.432043 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.432061 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.432174 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.432203 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.432232 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.432253 4847 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.432273 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.432291 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.432359 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.432380 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.432403 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.432432 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.432454 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.432797 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.432874 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.432901 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.432970 4847 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.432993 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.433051 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.433108 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.433151 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.433174 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: 
\"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.433901 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.433980 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.434041 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.434073 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.434104 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.434140 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.434168 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.434200 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.434230 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.434997 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435031 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " 
Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435070 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435095 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435117 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435137 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435157 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435180 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435202 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435222 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435242 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435261 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435283 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 
00:25:46.435306 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435327 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435346 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435378 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435399 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435420 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" 
(UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435441 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435462 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435481 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435506 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435526 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435547 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" 
(UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435564 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435586 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435632 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435659 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435690 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 18 00:25:46 crc 
kubenswrapper[4847]: I0218 00:25:46.435722 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435760 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435797 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435826 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435859 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435883 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435923 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435956 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435986 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436017 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436049 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 18 00:25:46 
crc kubenswrapper[4847]: I0218 00:25:46.436128 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436188 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.425839 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.427184 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436400 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436530 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436630 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436678 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436718 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436749 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436780 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436809 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436837 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436882 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" 
(UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436982 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.437035 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.437259 4847 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.437284 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.429499 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: 
"925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.429718 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.440016 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431689 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.431889 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.432383 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.432723 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.432944 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.432962 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.433159 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.433328 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.433386 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.433616 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.433638 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.433658 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.441314 4847 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.441699 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.442755 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.433857 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.433890 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.433904 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.434008 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.434139 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.434153 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.434834 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.434931 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.434999 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435130 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435139 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435238 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435466 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435515 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435571 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435698 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435867 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.435957 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436033 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436281 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: E0218 00:25:46.436339 4847 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436561 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436573 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436624 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436700 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436807 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436831 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436864 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.436143 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.437145 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.437324 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.437484 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.437507 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.437984 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.438144 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.438232 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.438460 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.438509 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.438288 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.438509 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.438582 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.438780 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: E0218 00:25:46.439028 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:25:46.93900572 +0000 UTC m=+20.316356662 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.439334 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.439783 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.440220 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.440503 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.440834 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.440937 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.441039 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.441242 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.441256 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.441263 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.441303 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.441314 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.441353 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.441517 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.441563 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.441925 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.442038 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.442191 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.442569 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.442610 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.442663 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.442708 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.442924 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.443030 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.443050 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.443435 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.443472 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.443782 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.443784 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.443820 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.443872 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.444436 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.446640 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.447157 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.447204 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.447360 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.447685 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.450746 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.450755 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.450995 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.451086 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.451247 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.451676 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.451829 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.452032 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: E0218 00:25:46.452079 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:25:46.952051116 +0000 UTC m=+20.329402058 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.452370 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.452541 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.452751 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.452830 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.452871 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.453027 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.453697 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.454260 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.455016 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.455023 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.455371 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.455433 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.455665 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.455791 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.455932 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.456075 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.456320 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.456430 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.456451 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.456743 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.456807 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.456863 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.457298 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.457922 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.458011 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.458259 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.458521 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.458680 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.458773 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.458917 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.459057 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.459279 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.459356 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.459405 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.459424 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.459634 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.459688 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: E0218 00:25:46.459726 4847 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:25:46 crc kubenswrapper[4847]: E0218 00:25:46.459794 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:25:46.959778883 +0000 UTC m=+20.337129825 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.460064 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.460158 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.460555 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.460723 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.461066 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.461112 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.461366 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.461371 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.461472 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.461506 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.461641 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.461670 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.461906 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.461917 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.461962 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.462108 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.462210 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.463126 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.463375 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.463444 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.463689 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.466118 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.466134 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.466502 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.466517 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.467023 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.467465 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.467439 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.468718 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.472218 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.472366 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.472743 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.473054 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.473124 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.473129 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.473237 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.473328 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.473638 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.473885 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.474203 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.474352 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.474490 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.474840 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.474858 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: E0218 00:25:46.475228 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:25:46 crc kubenswrapper[4847]: E0218 00:25:46.475250 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:25:46 crc kubenswrapper[4847]: E0218 00:25:46.475262 4847 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:25:46 crc kubenswrapper[4847]: E0218 00:25:46.475321 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 00:25:46.975302759 +0000 UTC m=+20.352653701 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.475356 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: E0218 00:25:46.475424 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:25:46 crc kubenswrapper[4847]: E0218 00:25:46.475436 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:25:46 crc kubenswrapper[4847]: E0218 00:25:46.475447 4847 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.475451 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: 
"57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: E0218 00:25:46.475474 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 00:25:46.975466663 +0000 UTC m=+20.352817605 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.475702 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.476669 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.476964 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.477830 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.478153 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.478402 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.481692 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.483795 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.483937 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.485690 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.488774 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.489044 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.490093 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.490152 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.496989 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.503589 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.515270 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.518060 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.519124 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.520187 4847 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832" exitCode=255 Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.520254 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832"} Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.533103 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.535580 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.536663 4847 scope.go:117] "RemoveContainer" containerID="a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538085 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538382 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538443 4847 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538461 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538472 4847 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538482 4847 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538491 4847 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538500 4847 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538510 4847 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538518 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538526 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538536 4847 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538295 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538546 4847 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538586 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538625 4847 
reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538639 4847 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538651 4847 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538662 4847 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538673 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538684 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538696 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538707 4847 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538720 4847 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538732 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538743 4847 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538757 4847 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538934 4847 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538957 4847 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538974 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node 
\"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538986 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538998 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539009 4847 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539019 4847 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539030 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539040 4847 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539050 4847 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539060 4847 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539070 4847 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539081 4847 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539094 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539104 4847 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539116 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539128 4847 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539138 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539149 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539162 4847 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539173 4847 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539183 4847 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539193 4847 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539204 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539213 4847 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 18 
00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539223 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539234 4847 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539243 4847 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539267 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539279 4847 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539290 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539300 4847 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539311 4847 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539321 4847 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539331 4847 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539342 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539352 4847 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539362 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539375 4847 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539389 4847 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539400 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539413 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539425 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539438 4847 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539450 4847 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539463 4847 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539475 4847 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539488 4847 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539503 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539518 4847 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539536 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539547 4847 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539557 4847 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539568 4847 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539580 4847 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539591 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539624 4847 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539636 4847 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539647 4847 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539659 4847 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539670 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 
crc kubenswrapper[4847]: I0218 00:25:46.539682 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539717 4847 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539729 4847 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539760 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539772 4847 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539783 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539797 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539809 4847 reconciler_common.go:293] "Volume detached for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539820 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539832 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539843 4847 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539854 4847 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539866 4847 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539901 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539914 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539926 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539938 4847 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539950 4847 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539962 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539973 4847 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539988 4847 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.539997 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540007 4847 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540016 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540027 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.538934 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540040 4847 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540209 4847 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540227 4847 reconciler_common.go:293] 
"Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540239 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540250 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540260 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540274 4847 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540285 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540295 4847 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540307 4847 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540319 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540330 4847 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540341 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540352 4847 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540376 4847 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540392 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540403 4847 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 
00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540413 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540424 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540435 4847 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540446 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540457 4847 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540468 4847 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540479 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540491 4847 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540502 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540514 4847 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540525 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540537 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540549 4847 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540560 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540571 4847 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540583 4847 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540614 4847 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540628 4847 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540639 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540649 4847 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540662 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540673 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on 
node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540685 4847 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540697 4847 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540709 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540721 4847 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540732 4847 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540744 4847 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540754 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540765 4847 
reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540779 4847 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540792 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540803 4847 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540815 4847 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540825 4847 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540862 4847 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540872 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540882 4847 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540891 4847 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540900 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540909 4847 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540919 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540929 4847 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540939 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 
00:25:46.540950 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540961 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540970 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540983 4847 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.540992 4847 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.541003 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.541014 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.541025 4847 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.541035 4847 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.541046 4847 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.541058 4847 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.541068 4847 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.541079 4847 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.541090 4847 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.541101 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.541113 4847 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.541125 4847 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.541136 4847 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.541151 4847 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.541170 4847 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.541184 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.545804 4847 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.560784 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.578290 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.591597 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.600950 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.646149 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.655069 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.659954 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.750026 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:25:46 crc kubenswrapper[4847]: I0218 00:25:46.945080 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:25:46 crc kubenswrapper[4847]: E0218 00:25:46.945275 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:25:47.945224492 +0000 UTC m=+21.322575444 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.046529 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.046582 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.046635 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.046676 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:25:47 crc kubenswrapper[4847]: E0218 00:25:47.046758 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:25:47 crc kubenswrapper[4847]: E0218 00:25:47.046781 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:25:47 crc kubenswrapper[4847]: E0218 00:25:47.046793 4847 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:25:47 crc kubenswrapper[4847]: E0218 00:25:47.046826 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:25:47 crc kubenswrapper[4847]: E0218 00:25:47.046890 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:25:47 crc kubenswrapper[4847]: E0218 00:25:47.046907 4847 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:25:47 crc 
kubenswrapper[4847]: E0218 00:25:47.046929 4847 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:25:47 crc kubenswrapper[4847]: E0218 00:25:47.046839 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 00:25:48.046824221 +0000 UTC m=+21.424175163 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:25:47 crc kubenswrapper[4847]: E0218 00:25:47.046760 4847 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:25:47 crc kubenswrapper[4847]: E0218 00:25:47.047069 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:25:48.047038036 +0000 UTC m=+21.424388978 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:25:47 crc kubenswrapper[4847]: E0218 00:25:47.047117 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 00:25:48.047082457 +0000 UTC m=+21.424433399 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:25:47 crc kubenswrapper[4847]: E0218 00:25:47.047144 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:25:48.047135718 +0000 UTC m=+21.424486660 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.168638 4847 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 18 00:25:47 crc kubenswrapper[4847]: W0218 00:25:47.168833 4847 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Feb 18 00:25:47 crc kubenswrapper[4847]: W0218 00:25:47.168880 4847 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Feb 18 00:25:47 crc kubenswrapper[4847]: W0218 00:25:47.169485 4847 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 18 00:25:47 crc kubenswrapper[4847]: W0218 00:25:47.169576 4847 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Feb 18 00:25:47 crc kubenswrapper[4847]: W0218 00:25:47.169643 4847 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service 
ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 18 00:25:47 crc kubenswrapper[4847]: W0218 00:25:47.169683 4847 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Feb 18 00:25:47 crc kubenswrapper[4847]: W0218 00:25:47.169708 4847 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Feb 18 00:25:47 crc kubenswrapper[4847]: E0218 00:25:47.169639 4847 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.80:53202->38.102.83.80:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18952f911b96f9e9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:25:28.719800809 +0000 UTC m=+2.097151771,LastTimestamp:2026-02-18 00:25:28.719800809 +0000 UTC m=+2.097151771,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:25:47 crc kubenswrapper[4847]: W0218 00:25:47.169775 4847 
reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 18 00:25:47 crc kubenswrapper[4847]: W0218 00:25:47.169805 4847 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Feb 18 00:25:47 crc kubenswrapper[4847]: W0218 00:25:47.169835 4847 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Feb 18 00:25:47 crc kubenswrapper[4847]: W0218 00:25:47.169864 4847 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Feb 18 00:25:47 crc kubenswrapper[4847]: W0218 00:25:47.169583 4847 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.262694 4847 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-18 00:20:46 +0000 UTC, rotation deadline is 2026-11-01 18:53:45.938587373 +0000 UTC Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.262799 4847 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Waiting 6162h27m58.675790705s for next certificate rotation Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.348883 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 01:31:47.967673382 +0000 UTC Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.403295 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:25:47 crc kubenswrapper[4847]: E0218 00:25:47.403530 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.407041 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.407770 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.408456 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.409047 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" 
path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.409618 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.410084 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.410681 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.411219 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.411885 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.412426 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.412935 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.413551 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.414357 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.418700 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.419285 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.420220 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.420764 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.421705 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.422098 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.423023 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.424014 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.424517 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.425109 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.425995 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.426880 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.426802 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 
00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.427701 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.428324 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.429487 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.430084 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.431072 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.431540 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.432153 4847 kubelet_volumes.go:152] 
"Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.432689 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.434296 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.434851 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.435705 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.437398 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.438088 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.438362 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.439024 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.439680 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.440732 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.441183 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.442131 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.442761 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.443876 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.444354 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.445241 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.445750 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.446814 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.446845 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.447396 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.448286 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.448793 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.449324 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.450362 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.450866 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.459313 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.494957 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.535222 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.545043 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4"} Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.545115 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a87ee1a71a7af955d9f010866a805bcaf31ea21036527b5ee49be1bd13d21c07"} Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.548241 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.550580 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f"} Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.551220 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.551467 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a335923255bce5f22c0566dd38e627af8407cd8e65384d61455268d55189dcca"} Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.552829 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a"} Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.552886 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a"} Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.552898 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fe67f2bf3e1835bf1e5ffdd827724f279485a02aa3de63fad363e8ff627a1b6f"} Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.573662 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-xsj47"] Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.574152 4847 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.581131 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.581904 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 
00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.582139 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.582175 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.582199 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.582298 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.582402 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-4w5fp"] Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.582770 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-4w5fp" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.586897 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.587577 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.587779 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.623831 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.639367 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.651172 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5-mcd-auth-proxy-config\") pod \"machine-config-daemon-xsj47\" (UID: \"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\") " pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.651223 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8h9v\" (UniqueName: \"kubernetes.io/projected/ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5-kube-api-access-g8h9v\") pod \"machine-config-daemon-xsj47\" (UID: \"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\") " pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.651555 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5-proxy-tls\") pod \"machine-config-daemon-xsj47\" (UID: \"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\") " pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.651660 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5-rootfs\") pod \"machine-config-daemon-xsj47\" (UID: \"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\") " pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.658510 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.673819 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.686020 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.704830 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.721504 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.737532 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.751409 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.752267 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptjlr\" (UniqueName: \"kubernetes.io/projected/1185a103-f769-4668-9fe0-099078aeb848-kube-api-access-ptjlr\") pod \"node-resolver-4w5fp\" (UID: \"1185a103-f769-4668-9fe0-099078aeb848\") " pod="openshift-dns/node-resolver-4w5fp" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.752323 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5-mcd-auth-proxy-config\") pod \"machine-config-daemon-xsj47\" (UID: \"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\") " pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.752363 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8h9v\" (UniqueName: 
\"kubernetes.io/projected/ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5-kube-api-access-g8h9v\") pod \"machine-config-daemon-xsj47\" (UID: \"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\") " pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.752475 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5-proxy-tls\") pod \"machine-config-daemon-xsj47\" (UID: \"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\") " pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.752533 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5-rootfs\") pod \"machine-config-daemon-xsj47\" (UID: \"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\") " pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.752572 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1185a103-f769-4668-9fe0-099078aeb848-hosts-file\") pod \"node-resolver-4w5fp\" (UID: \"1185a103-f769-4668-9fe0-099078aeb848\") " pod="openshift-dns/node-resolver-4w5fp" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.752794 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5-rootfs\") pod \"machine-config-daemon-xsj47\" (UID: \"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\") " pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.753500 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5-mcd-auth-proxy-config\") pod \"machine-config-daemon-xsj47\" (UID: \"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\") " pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.758006 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5-proxy-tls\") pod \"machine-config-daemon-xsj47\" (UID: \"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\") " pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.766132 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.772550 4847 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8h9v\" (UniqueName: \"kubernetes.io/projected/ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5-kube-api-access-g8h9v\") pod \"machine-config-daemon-xsj47\" (UID: \"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\") " pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.782397 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.809891 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",
\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.830975 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.846534 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.853867 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1185a103-f769-4668-9fe0-099078aeb848-hosts-file\") pod \"node-resolver-4w5fp\" (UID: \"1185a103-f769-4668-9fe0-099078aeb848\") " pod="openshift-dns/node-resolver-4w5fp" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.853940 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptjlr\" (UniqueName: \"kubernetes.io/projected/1185a103-f769-4668-9fe0-099078aeb848-kube-api-access-ptjlr\") pod \"node-resolver-4w5fp\" (UID: \"1185a103-f769-4668-9fe0-099078aeb848\") " pod="openshift-dns/node-resolver-4w5fp" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.854065 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1185a103-f769-4668-9fe0-099078aeb848-hosts-file\") pod \"node-resolver-4w5fp\" (UID: \"1185a103-f769-4668-9fe0-099078aeb848\") " pod="openshift-dns/node-resolver-4w5fp" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.857895 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.876511 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptjlr\" (UniqueName: \"kubernetes.io/projected/1185a103-f769-4668-9fe0-099078aeb848-kube-api-access-ptjlr\") pod \"node-resolver-4w5fp\" (UID: \"1185a103-f769-4668-9fe0-099078aeb848\") " pod="openshift-dns/node-resolver-4w5fp" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.898089 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.905080 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4w5fp" Feb 18 00:25:47 crc kubenswrapper[4847]: W0218 00:25:47.922724 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec351c0c_107b_4bfd_ae6b_1e6ae2c22bd5.slice/crio-db4a3f9dbb48bd031aec9568639cd4ab9a64d29b3e185f6532617454e49b1fde WatchSource:0}: Error finding container db4a3f9dbb48bd031aec9568639cd4ab9a64d29b3e185f6532617454e49b1fde: Status 404 returned error can't find the container with id db4a3f9dbb48bd031aec9568639cd4ab9a64d29b3e185f6532617454e49b1fde Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.954317 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:25:47 crc kubenswrapper[4847]: E0218 00:25:47.954587 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:25:49.95450493 +0000 UTC m=+23.331855972 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.982247 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-wfg4t"] Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.983513 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-wprf4"] Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.983535 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.984380 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-wprf4" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.985828 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.986064 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.987779 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.988128 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.988332 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.988624 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 00:25:47 crc kubenswrapper[4847]: I0218 00:25:47.990100 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.011889 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.035545 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.052296 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.054875 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/94a6901a-92ec-4fd6-8ee3-ff3e6971c003-os-release\") pod \"multus-additional-cni-plugins-wfg4t\" (UID: \"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\") " pod="openshift-multus/multus-additional-cni-plugins-wfg4t" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.054916 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-os-release\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.054939 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f2eb9a65-88b5-49d1-885a-98c60c1283b4-cni-binary-copy\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.054960 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-host-run-k8s-cni-cncf-io\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.055046 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp6tx\" (UniqueName: \"kubernetes.io/projected/f2eb9a65-88b5-49d1-885a-98c60c1283b4-kube-api-access-zp6tx\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.055108 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/94a6901a-92ec-4fd6-8ee3-ff3e6971c003-cni-binary-copy\") pod \"multus-additional-cni-plugins-wfg4t\" (UID: \"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\") " pod="openshift-multus/multus-additional-cni-plugins-wfg4t" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.055171 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-multus-cni-dir\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.055211 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/94a6901a-92ec-4fd6-8ee3-ff3e6971c003-system-cni-dir\") pod \"multus-additional-cni-plugins-wfg4t\" (UID: \"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\") " pod="openshift-multus/multus-additional-cni-plugins-wfg4t" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.055252 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-cnibin\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.055349 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-multus-conf-dir\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.055432 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/94a6901a-92ec-4fd6-8ee3-ff3e6971c003-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wfg4t\" (UID: \"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\") " pod="openshift-multus/multus-additional-cni-plugins-wfg4t" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.055501 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 
00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.055566 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-system-cni-dir\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.055618 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-multus-socket-dir-parent\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.055653 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-host-run-netns\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.055688 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/94a6901a-92ec-4fd6-8ee3-ff3e6971c003-cnibin\") pod \"multus-additional-cni-plugins-wfg4t\" (UID: \"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\") " pod="openshift-multus/multus-additional-cni-plugins-wfg4t" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.055715 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-host-var-lib-cni-bin\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc 
kubenswrapper[4847]: E0218 00:25:48.055725 4847 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.055743 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f2eb9a65-88b5-49d1-885a-98c60c1283b4-multus-daemon-config\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.055775 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:25:48 crc kubenswrapper[4847]: E0218 00:25:48.055847 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:25:50.055798302 +0000 UTC m=+23.433149244 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.055896 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-etc-kubernetes\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: E0218 00:25:48.055916 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.055923 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-host-var-lib-kubelet\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: E0218 00:25:48.055940 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:25:48 crc kubenswrapper[4847]: E0218 00:25:48.055957 4847 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] 
Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.055942 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-host-run-multus-certs\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.056076 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:25:48 crc kubenswrapper[4847]: E0218 00:25:48.056108 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 00:25:50.056087679 +0000 UTC m=+23.433438711 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.056150 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6ld5\" (UniqueName: \"kubernetes.io/projected/94a6901a-92ec-4fd6-8ee3-ff3e6971c003-kube-api-access-n6ld5\") pod \"multus-additional-cni-plugins-wfg4t\" (UID: \"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\") " pod="openshift-multus/multus-additional-cni-plugins-wfg4t" Feb 18 00:25:48 crc kubenswrapper[4847]: E0218 00:25:48.056184 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:25:48 crc kubenswrapper[4847]: E0218 00:25:48.056201 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:25:48 crc kubenswrapper[4847]: E0218 00:25:48.056212 4847 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:25:48 crc kubenswrapper[4847]: E0218 00:25:48.056246 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-18 00:25:50.056236462 +0000 UTC m=+23.433587514 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.056205 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.056305 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-host-var-lib-cni-multus\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: E0218 00:25:48.056331 4847 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.056331 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-hostroot\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: E0218 
00:25:48.056371 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:25:50.056359645 +0000 UTC m=+23.433710657 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.056387 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/94a6901a-92ec-4fd6-8ee3-ff3e6971c003-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wfg4t\" (UID: \"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\") " pod="openshift-multus/multus-additional-cni-plugins-wfg4t" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.072387 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.077585 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.110810 4847 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.137484 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.154849 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c2
59c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.160456 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/94a6901a-92ec-4fd6-8ee3-ff3e6971c003-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wfg4t\" (UID: \"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\") " pod="openshift-multus/multus-additional-cni-plugins-wfg4t" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.160514 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-system-cni-dir\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.160540 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-multus-socket-dir-parent\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.160564 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-host-run-netns\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.160585 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/94a6901a-92ec-4fd6-8ee3-ff3e6971c003-cnibin\") pod \"multus-additional-cni-plugins-wfg4t\" (UID: \"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\") " pod="openshift-multus/multus-additional-cni-plugins-wfg4t" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.160623 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-host-var-lib-cni-bin\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.160648 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f2eb9a65-88b5-49d1-885a-98c60c1283b4-multus-daemon-config\") pod \"multus-wprf4\" 
(UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.160682 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-etc-kubernetes\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.160687 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-multus-socket-dir-parent\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.160711 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6ld5\" (UniqueName: \"kubernetes.io/projected/94a6901a-92ec-4fd6-8ee3-ff3e6971c003-kube-api-access-n6ld5\") pod \"multus-additional-cni-plugins-wfg4t\" (UID: \"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\") " pod="openshift-multus/multus-additional-cni-plugins-wfg4t" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.160732 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-host-var-lib-kubelet\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.160752 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-host-run-multus-certs\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " 
pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.160772 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-host-var-lib-cni-multus\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.160796 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-hostroot\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.160836 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/94a6901a-92ec-4fd6-8ee3-ff3e6971c003-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wfg4t\" (UID: \"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\") " pod="openshift-multus/multus-additional-cni-plugins-wfg4t" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.160864 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/94a6901a-92ec-4fd6-8ee3-ff3e6971c003-os-release\") pod \"multus-additional-cni-plugins-wfg4t\" (UID: \"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\") " pod="openshift-multus/multus-additional-cni-plugins-wfg4t" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.160883 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-system-cni-dir\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 
00:25:48.160891 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-os-release\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.160918 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f2eb9a65-88b5-49d1-885a-98c60c1283b4-cni-binary-copy\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.160946 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-host-run-k8s-cni-cncf-io\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.160973 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp6tx\" (UniqueName: \"kubernetes.io/projected/f2eb9a65-88b5-49d1-885a-98c60c1283b4-kube-api-access-zp6tx\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.160995 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/94a6901a-92ec-4fd6-8ee3-ff3e6971c003-cni-binary-copy\") pod \"multus-additional-cni-plugins-wfg4t\" (UID: \"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\") " pod="openshift-multus/multus-additional-cni-plugins-wfg4t" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.161018 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-multus-cni-dir\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.161041 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/94a6901a-92ec-4fd6-8ee3-ff3e6971c003-system-cni-dir\") pod \"multus-additional-cni-plugins-wfg4t\" (UID: \"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\") " pod="openshift-multus/multus-additional-cni-plugins-wfg4t" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.161063 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-cnibin\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.161081 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-multus-conf-dir\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.161131 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-multus-conf-dir\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.161163 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-host-var-lib-kubelet\") pod \"multus-wprf4\" (UID: 
\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.161188 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-host-run-multus-certs\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.161212 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-host-var-lib-cni-multus\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.161237 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-hostroot\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.161252 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-host-run-netns\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.161285 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/94a6901a-92ec-4fd6-8ee3-ff3e6971c003-cnibin\") pod \"multus-additional-cni-plugins-wfg4t\" (UID: \"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\") " pod="openshift-multus/multus-additional-cni-plugins-wfg4t" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.161281 4847 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/94a6901a-92ec-4fd6-8ee3-ff3e6971c003-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wfg4t\" (UID: \"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\") " pod="openshift-multus/multus-additional-cni-plugins-wfg4t" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.161312 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-host-var-lib-cni-bin\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.161390 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-etc-kubernetes\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.161694 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-multus-cni-dir\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.161934 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/94a6901a-92ec-4fd6-8ee3-ff3e6971c003-os-release\") pod \"multus-additional-cni-plugins-wfg4t\" (UID: \"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\") " pod="openshift-multus/multus-additional-cni-plugins-wfg4t" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.162008 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-os-release\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.162049 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/94a6901a-92ec-4fd6-8ee3-ff3e6971c003-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wfg4t\" (UID: \"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\") " pod="openshift-multus/multus-additional-cni-plugins-wfg4t" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.162065 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f2eb9a65-88b5-49d1-885a-98c60c1283b4-multus-daemon-config\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.162116 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/94a6901a-92ec-4fd6-8ee3-ff3e6971c003-system-cni-dir\") pod \"multus-additional-cni-plugins-wfg4t\" (UID: \"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\") " pod="openshift-multus/multus-additional-cni-plugins-wfg4t" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.162129 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-cnibin\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.162160 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f2eb9a65-88b5-49d1-885a-98c60c1283b4-host-run-k8s-cni-cncf-io\") pod \"multus-wprf4\" (UID: 
\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.162452 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/94a6901a-92ec-4fd6-8ee3-ff3e6971c003-cni-binary-copy\") pod \"multus-additional-cni-plugins-wfg4t\" (UID: \"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\") " pod="openshift-multus/multus-additional-cni-plugins-wfg4t" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.162722 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f2eb9a65-88b5-49d1-885a-98c60c1283b4-cni-binary-copy\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.172056 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.180049 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.180449 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6ld5\" (UniqueName: \"kubernetes.io/projected/94a6901a-92ec-4fd6-8ee3-ff3e6971c003-kube-api-access-n6ld5\") pod \"multus-additional-cni-plugins-wfg4t\" (UID: \"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\") " 
pod="openshift-multus/multus-additional-cni-plugins-wfg4t" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.186667 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp6tx\" (UniqueName: \"kubernetes.io/projected/f2eb9a65-88b5-49d1-885a-98c60c1283b4-kube-api-access-zp6tx\") pod \"multus-wprf4\" (UID: \"f2eb9a65-88b5-49d1-885a-98c60c1283b4\") " pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.186927 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"}
,{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.206661 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.220681 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 
00:25:48.225078 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.234977 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.235715 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.241300 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.250187 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 
00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.264238 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.283087 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.296538 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.307939 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.310084 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.315254 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-wprf4" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.319201 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.320828 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.334592 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.346718 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.349062 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 19:21:42.315059237 +0000 UTC Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.359021 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: W0218 00:25:48.378038 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94a6901a_92ec_4fd6_8ee3_ff3e6971c003.slice/crio-985b9f025e9510eee84306dd6c5084d8ce9f8ca64ad84eb7650790bef181d076 WatchSource:0}: Error finding container 985b9f025e9510eee84306dd6c5084d8ce9f8ca64ad84eb7650790bef181d076: Status 404 returned error can't find the container with id 985b9f025e9510eee84306dd6c5084d8ce9f8ca64ad84eb7650790bef181d076 Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.403641 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.403684 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:25:48 crc kubenswrapper[4847]: E0218 00:25:48.403785 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:25:48 crc kubenswrapper[4847]: E0218 00:25:48.403860 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.418037 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bxm6w"] Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.418845 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.422375 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.423220 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.423461 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.423907 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.424401 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.424529 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.428165 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.446140 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.464547 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-node-log\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.464627 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/86e5946b-870b-46f1-8923-4a8abd64da45-ovnkube-script-lib\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.464657 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-run-ovn\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.464697 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-var-lib-openvswitch\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.464727 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-slash\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.464756 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjwgx\" (UniqueName: \"kubernetes.io/projected/86e5946b-870b-46f1-8923-4a8abd64da45-kube-api-access-fjwgx\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.464804 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-run-systemd\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.464831 4847 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/86e5946b-870b-46f1-8923-4a8abd64da45-env-overrides\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.464858 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.464886 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-etc-openvswitch\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.464914 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-log-socket\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.464938 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/86e5946b-870b-46f1-8923-4a8abd64da45-ovn-node-metrics-cert\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.465056 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-systemd-units\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.465086 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-run-netns\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.465109 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-run-openvswitch\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.465132 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-cni-bin\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.465161 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/86e5946b-870b-46f1-8923-4a8abd64da45-ovnkube-config\") pod \"ovnkube-node-bxm6w\" (UID: 
\"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.465266 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-cni-netd\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.465320 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-run-ovn-kubernetes\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.465349 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-kubelet\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.466070 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: W0218 00:25:48.477794 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2eb9a65_88b5_49d1_885a_98c60c1283b4.slice/crio-443c6f42b676fdb5d38929804a802a71b4284165d268ae7c116b88ec27a51623 WatchSource:0}: Error finding container 443c6f42b676fdb5d38929804a802a71b4284165d268ae7c116b88ec27a51623: Status 404 returned error can't find the container with id 443c6f42b676fdb5d38929804a802a71b4284165d268ae7c116b88ec27a51623 Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.484793 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.511024 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.518867 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.530512 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.534791 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.554304 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.554426 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.559087 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" event={"ID":"94a6901a-92ec-4fd6-8ee3-ff3e6971c003","Type":"ContainerStarted","Data":"985b9f025e9510eee84306dd6c5084d8ce9f8ca64ad84eb7650790bef181d076"} Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.560496 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wprf4" 
event={"ID":"f2eb9a65-88b5-49d1-885a-98c60c1283b4","Type":"ContainerStarted","Data":"443c6f42b676fdb5d38929804a802a71b4284165d268ae7c116b88ec27a51623"} Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.567788 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/86e5946b-870b-46f1-8923-4a8abd64da45-ovnkube-config\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.567834 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-cni-netd\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.567862 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-run-ovn-kubernetes\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.567888 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-kubelet\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.567922 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-node-log\") pod \"ovnkube-node-bxm6w\" (UID: 
\"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.567947 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/86e5946b-870b-46f1-8923-4a8abd64da45-ovnkube-script-lib\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.567968 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-run-ovn\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.568001 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-slash\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.568026 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-var-lib-openvswitch\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.568058 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjwgx\" (UniqueName: \"kubernetes.io/projected/86e5946b-870b-46f1-8923-4a8abd64da45-kube-api-access-fjwgx\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.568098 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-run-systemd\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.568127 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/86e5946b-870b-46f1-8923-4a8abd64da45-env-overrides\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.568148 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.568171 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-etc-openvswitch\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.568191 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-log-socket\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.568211 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/86e5946b-870b-46f1-8923-4a8abd64da45-ovn-node-metrics-cert\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.568231 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-systemd-units\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.568254 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-run-netns\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.568275 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-run-openvswitch\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.568273 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-run-ovn\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: 
I0218 00:25:48.568299 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-cni-bin\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.568369 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-cni-bin\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.568418 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-slash\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.568449 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-var-lib-openvswitch\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.568742 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-run-systemd\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.569020 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/86e5946b-870b-46f1-8923-4a8abd64da45-ovnkube-config\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.569062 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-cni-netd\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.569089 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-run-ovn-kubernetes\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.569115 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-kubelet\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.569138 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-node-log\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.569141 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/86e5946b-870b-46f1-8923-4a8abd64da45-env-overrides\") pod \"ovnkube-node-bxm6w\" 
(UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.569182 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.569208 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-etc-openvswitch\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.569234 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-log-socket\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.569588 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/86e5946b-870b-46f1-8923-4a8abd64da45-ovnkube-script-lib\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.569649 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-systemd-units\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.569660 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-run-netns\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.569676 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-run-openvswitch\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.570307 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerStarted","Data":"d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061"} Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.570341 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerStarted","Data":"21c935ca9c8e2ee24068070e45953a236b1e5a57c92d0e5b4f033ed0aeab7831"} Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.570352 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerStarted","Data":"db4a3f9dbb48bd031aec9568639cd4ab9a64d29b3e185f6532617454e49b1fde"} Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.572697 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/86e5946b-870b-46f1-8923-4a8abd64da45-ovn-node-metrics-cert\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.573780 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.578063 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4w5fp" event={"ID":"1185a103-f769-4668-9fe0-099078aeb848","Type":"ContainerStarted","Data":"e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76"} Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.578102 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4w5fp" event={"ID":"1185a103-f769-4668-9fe0-099078aeb848","Type":"ContainerStarted","Data":"f04474b42fa68a01838426f453851a967cb605d39831e3e85dd455b35f7bdcf6"} Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.584330 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjwgx\" (UniqueName: \"kubernetes.io/projected/86e5946b-870b-46f1-8923-4a8abd64da45-kube-api-access-fjwgx\") pod \"ovnkube-node-bxm6w\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.589493 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.613233 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.625447 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.628766 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.649558 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.650587 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.654361 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.669961 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] 
issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",
\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.686483 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.713035 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.726634 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 
00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.729337 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.738179 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: W0218 00:25:48.743722 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86e5946b_870b_46f1_8923_4a8abd64da45.slice/crio-ba197af23cd9f498052e30f3f139c9da9f3d2b0a16a84678546f2b24ddbbd5e8 WatchSource:0}: Error finding container ba197af23cd9f498052e30f3f139c9da9f3d2b0a16a84678546f2b24ddbbd5e8: Status 404 returned error can't find the container with id ba197af23cd9f498052e30f3f139c9da9f3d2b0a16a84678546f2b24ddbbd5e8 Feb 18 00:25:48 crc 
kubenswrapper[4847]: I0218 00:25:48.750798 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.765017 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.777789 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.789649 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.804048 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.826527 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.839335 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.874639 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:48 crc kubenswrapper[4847]: I0218 00:25:48.918198 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.227166 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.234252 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.236759 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.245966 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.262580 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.276582 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.288759 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.307867 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.319279 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.332037 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.344826 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a
2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-18T00:25:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.350732 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 00:41:58.480590419 +0000 UTC Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.359548 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.383413 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.402514 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 
00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.403895 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:25:49 crc kubenswrapper[4847]: E0218 00:25:49.404129 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.416503 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.454521 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.497706 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.536218 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.574891 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.583858 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wprf4" event={"ID":"f2eb9a65-88b5-49d1-885a-98c60c1283b4","Type":"ContainerStarted","Data":"61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6"} Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.585658 4847 generic.go:334] "Generic (PLEG): container finished" podID="86e5946b-870b-46f1-8923-4a8abd64da45" containerID="03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9" exitCode=0 Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.585702 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerDied","Data":"03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9"} Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 
00:25:49.585754 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerStarted","Data":"ba197af23cd9f498052e30f3f139c9da9f3d2b0a16a84678546f2b24ddbbd5e8"} Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.587512 4847 generic.go:334] "Generic (PLEG): container finished" podID="94a6901a-92ec-4fd6-8ee3-ff3e6971c003" containerID="64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153" exitCode=0 Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.587635 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" event={"ID":"94a6901a-92ec-4fd6-8ee3-ff3e6971c003","Type":"ContainerDied","Data":"64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153"} Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.595462 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-d9clg"] Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.596168 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-d9clg" Feb 18 00:25:49 crc kubenswrapper[4847]: E0218 00:25:49.615851 4847 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.635067 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.646619 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.664842 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.680106 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v92gs\" (UniqueName: \"kubernetes.io/projected/b10e15ef-4ac4-4ad4-9b20-e005f4b3d484-kube-api-access-v92gs\") pod \"node-ca-d9clg\" (UID: \"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\") " pod="openshift-image-registry/node-ca-d9clg" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.680137 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b10e15ef-4ac4-4ad4-9b20-e005f4b3d484-host\") pod \"node-ca-d9clg\" (UID: \"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\") " pod="openshift-image-registry/node-ca-d9clg" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.680200 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/b10e15ef-4ac4-4ad4-9b20-e005f4b3d484-serviceca\") pod \"node-ca-d9clg\" (UID: \"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\") " pod="openshift-image-registry/node-ca-d9clg" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.685166 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.704784 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.779528 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.784246 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v92gs\" (UniqueName: \"kubernetes.io/projected/b10e15ef-4ac4-4ad4-9b20-e005f4b3d484-kube-api-access-v92gs\") pod \"node-ca-d9clg\" (UID: \"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\") " pod="openshift-image-registry/node-ca-d9clg" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.784294 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b10e15ef-4ac4-4ad4-9b20-e005f4b3d484-host\") pod \"node-ca-d9clg\" (UID: \"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\") " pod="openshift-image-registry/node-ca-d9clg" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.784328 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b10e15ef-4ac4-4ad4-9b20-e005f4b3d484-serviceca\") pod \"node-ca-d9clg\" (UID: \"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\") " pod="openshift-image-registry/node-ca-d9clg" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.785354 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/b10e15ef-4ac4-4ad4-9b20-e005f4b3d484-serviceca\") pod \"node-ca-d9clg\" (UID: 
\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\") " pod="openshift-image-registry/node-ca-d9clg" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.785428 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b10e15ef-4ac4-4ad4-9b20-e005f4b3d484-host\") pod \"node-ca-d9clg\" (UID: \"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\") " pod="openshift-image-registry/node-ca-d9clg" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.814582 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v92gs\" (UniqueName: \"kubernetes.io/projected/b10e15ef-4ac4-4ad4-9b20-e005f4b3d484-kube-api-access-v92gs\") pod \"node-ca-d9clg\" (UID: \"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\") " pod="openshift-image-registry/node-ca-d9clg" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.836000 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 
00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.861589 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.895965 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.936934 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.948175 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-d9clg" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.973058 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:49 crc kubenswrapper[4847]: I0218 00:25:49.986650 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:25:49 crc kubenswrapper[4847]: E0218 00:25:49.994281 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:25:53.994252089 +0000 UTC m=+27.371603041 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.015005 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea
83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.052997 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.093175 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.093495 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.093584 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:25:50 crc kubenswrapper[4847]: E0218 00:25:50.093662 4847 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:25:50 crc kubenswrapper[4847]: E0218 00:25:50.093736 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:25:54.093714327 +0000 UTC m=+27.471065459 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.093802 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.094069 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:25:50 crc kubenswrapper[4847]: E0218 00:25:50.093923 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:25:50 crc kubenswrapper[4847]: E0218 00:25:50.094169 4847 projected.go:288] Couldn't get 
configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:25:50 crc kubenswrapper[4847]: E0218 00:25:50.094184 4847 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:25:50 crc kubenswrapper[4847]: E0218 00:25:50.094245 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 00:25:54.094227839 +0000 UTC m=+27.471578781 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:25:50 crc kubenswrapper[4847]: E0218 00:25:50.094031 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:25:50 crc kubenswrapper[4847]: E0218 00:25:50.094277 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:25:50 crc kubenswrapper[4847]: E0218 00:25:50.094286 4847 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:25:50 crc kubenswrapper[4847]: E0218 00:25:50.094305 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 00:25:54.094299501 +0000 UTC m=+27.471650443 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:25:50 crc kubenswrapper[4847]: E0218 00:25:50.094143 4847 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:25:50 crc kubenswrapper[4847]: E0218 00:25:50.094331 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:25:54.094326121 +0000 UTC m=+27.471677063 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.139412 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerI
D\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.173098 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.219923 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.256463 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 
00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.293729 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.334439 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.351775 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 20:11:46.390704346 +0000 UTC Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.374245 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.403808 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:25:50 crc kubenswrapper[4847]: E0218 00:25:50.403921 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.404024 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:25:50 crc kubenswrapper[4847]: E0218 00:25:50.404179 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.413330 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.453001 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.493848 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.530536 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.572789 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.591615 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-d9clg" event={"ID":"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484","Type":"ContainerStarted","Data":"287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793"} Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.591662 4847 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-d9clg" event={"ID":"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484","Type":"ContainerStarted","Data":"1778b1718257269446f6952830c19818920256e2a003fc923570f09633050fd5"} Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.592547 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94"} Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.594432 4847 generic.go:334] "Generic (PLEG): container finished" podID="94a6901a-92ec-4fd6-8ee3-ff3e6971c003" containerID="50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86" exitCode=0 Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.594457 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" event={"ID":"94a6901a-92ec-4fd6-8ee3-ff3e6971c003","Type":"ContainerDied","Data":"50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86"} Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.597843 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerStarted","Data":"cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb"} Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.597895 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerStarted","Data":"efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df"} Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.597913 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" 
event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerStarted","Data":"5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c"} Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.597927 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerStarted","Data":"9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c"} Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.597939 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerStarted","Data":"d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485"} Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.597952 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerStarted","Data":"3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80"} Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.626620 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.655331 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.692426 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.733692 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.773276 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.815476 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.855830 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.903949 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.938755 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:50 crc kubenswrapper[4847]: I0218 00:25:50.973538 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:50Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.016755 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.055365 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.095426 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.141330 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.176901 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 
00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.216079 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.261280 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.298410 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.337783 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.352783 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 15:19:06.418056899 +0000 UTC Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.403538 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:25:51 crc kubenswrapper[4847]: E0218 00:25:51.403793 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.603899 4847 generic.go:334] "Generic (PLEG): container finished" podID="94a6901a-92ec-4fd6-8ee3-ff3e6971c003" containerID="4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e" exitCode=0 Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.603994 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" event={"ID":"94a6901a-92ec-4fd6-8ee3-ff3e6971c003","Type":"ContainerDied","Data":"4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e"} Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.631510 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.654779 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.671501 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.690147 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.710309 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.730115 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.745470 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.758864 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.771061 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.787737 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.809221 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.827232 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 
00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.854051 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.895379 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:51 crc kubenswrapper[4847]: I0218 00:25:51.937990 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:51Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.353475 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 04:00:43.37353863 +0000 UTC Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.403178 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.403285 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:25:52 crc kubenswrapper[4847]: E0218 00:25:52.403357 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:25:52 crc kubenswrapper[4847]: E0218 00:25:52.403491 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.614153 4847 generic.go:334] "Generic (PLEG): container finished" podID="94a6901a-92ec-4fd6-8ee3-ff3e6971c003" containerID="597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb" exitCode=0 Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.614225 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" event={"ID":"94a6901a-92ec-4fd6-8ee3-ff3e6971c003","Type":"ContainerDied","Data":"597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb"} Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.619022 4847 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.621881 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerStarted","Data":"9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4"} Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.621952 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.621989 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.622003 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.622148 4847 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.632278 4847 kubelet_node_status.go:115] "Node was previously 
registered" node="crc" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.632797 4847 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.634772 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.634847 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.634867 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.634897 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.634921 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:52Z","lastTransitionTime":"2026-02-18T00:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.639760 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:52Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.659501 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:52Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:52 crc kubenswrapper[4847]: E0218 00:25:52.658216 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:52Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.668644 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.668728 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.668756 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.668792 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.668819 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:52Z","lastTransitionTime":"2026-02-18T00:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.686679 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:52Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:52 crc kubenswrapper[4847]: E0218 00:25:52.691843 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:52Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.698662 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.698741 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.698766 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.698798 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.698822 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:52Z","lastTransitionTime":"2026-02-18T00:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.716960 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:52Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:52 crc kubenswrapper[4847]: E0218 00:25:52.717913 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:52Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.721180 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.721200 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.721208 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.721223 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:52 crc kubenswrapper[4847]: I0218 00:25:52.721233 4847 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:52Z","lastTransitionTime":"2026-02-18T00:25:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.261518 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 
00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:52Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: E0218 00:25:53.262071 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:52Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.265510 4847 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.267923 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.267951 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.267964 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.267984 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.268001 4847 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:53Z","lastTransitionTime":"2026-02-18T00:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.278716 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: E0218 00:25:53.284416 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: E0218 00:25:53.284591 4847 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.286806 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.286846 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.286861 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.286882 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.286898 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:53Z","lastTransitionTime":"2026-02-18T00:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.300654 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.319487 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.338320 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.354480 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 20:38:22.923076789 +0000 UTC Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.357594 4847 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.374423 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.389466 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.390292 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.390341 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.390353 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.390371 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.390383 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:53Z","lastTransitionTime":"2026-02-18T00:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.403712 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:25:53 crc kubenswrapper[4847]: E0218 00:25:53.403982 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.406740 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6
c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.433409 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.447681 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.493932 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.494254 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.494360 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.494463 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.494557 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:53Z","lastTransitionTime":"2026-02-18T00:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.597377 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.597682 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.597775 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.597863 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.597930 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:53Z","lastTransitionTime":"2026-02-18T00:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.629791 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" event={"ID":"94a6901a-92ec-4fd6-8ee3-ff3e6971c003","Type":"ContainerStarted","Data":"a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea"} Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.647114 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.669197 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.687688 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.701512 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.701551 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.701562 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.701585 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.701617 4847 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:53Z","lastTransitionTime":"2026-02-18T00:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.705628 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6l
d5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.731301 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.749662 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 
00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.765746 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.781020 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.801005 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00
:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.804655 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.804701 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.804714 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.804732 4847 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.804744 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:53Z","lastTransitionTime":"2026-02-18T00:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.822662 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.837913 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.855354 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.868050 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.882567 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.898835 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:53Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.907843 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.907879 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 
00:25:53.907890 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.907904 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:53 crc kubenswrapper[4847]: I0218 00:25:53.907915 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:53Z","lastTransitionTime":"2026-02-18T00:25:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.011233 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.011271 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.011280 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.011297 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.011307 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:54Z","lastTransitionTime":"2026-02-18T00:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.037077 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:25:54 crc kubenswrapper[4847]: E0218 00:25:54.037506 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:26:02.037474008 +0000 UTC m=+35.414824990 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.114954 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.115363 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.115533 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.115711 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:54 crc kubenswrapper[4847]: 
I0218 00:25:54.115859 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:54Z","lastTransitionTime":"2026-02-18T00:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.138813 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.139041 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:25:54 crc kubenswrapper[4847]: E0218 00:25:54.139162 4847 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.139334 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:25:54 crc kubenswrapper[4847]: E0218 00:25:54.139338 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:25:54 crc kubenswrapper[4847]: E0218 00:25:54.139661 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:25:54 crc kubenswrapper[4847]: E0218 00:25:54.139823 4847 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:25:54 crc kubenswrapper[4847]: E0218 00:25:54.139391 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:26:02.139362804 +0000 UTC m=+35.516713746 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:25:54 crc kubenswrapper[4847]: E0218 00:25:54.139455 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:25:54 crc kubenswrapper[4847]: E0218 00:25:54.140243 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:25:54 crc kubenswrapper[4847]: E0218 00:25:54.140251 4847 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:25:54 crc kubenswrapper[4847]: E0218 00:25:54.140266 4847 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:25:54 crc kubenswrapper[4847]: E0218 00:25:54.140318 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:26:02.140309877 +0000 UTC m=+35.517660809 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:25:54 crc kubenswrapper[4847]: E0218 00:25:54.140345 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 00:26:02.140324047 +0000 UTC m=+35.517674999 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:25:54 crc kubenswrapper[4847]: E0218 00:25:54.140372 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 00:26:02.140361688 +0000 UTC m=+35.517712640 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.140210 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.219346 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.220022 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.220185 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.220345 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.220495 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:54Z","lastTransitionTime":"2026-02-18T00:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.323673 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.324037 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.324219 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.324354 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.324495 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:54Z","lastTransitionTime":"2026-02-18T00:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.355382 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 08:49:05.533557646 +0000 UTC Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.403312 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.403411 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:25:54 crc kubenswrapper[4847]: E0218 00:25:54.403448 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:25:54 crc kubenswrapper[4847]: E0218 00:25:54.403876 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.426805 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.426862 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.426882 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.426917 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.426938 4847 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:54Z","lastTransitionTime":"2026-02-18T00:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.529869 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.529922 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.529940 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.529966 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.529984 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:54Z","lastTransitionTime":"2026-02-18T00:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.632435 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.632493 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.632511 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.632537 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.632556 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:54Z","lastTransitionTime":"2026-02-18T00:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.639077 4847 generic.go:334] "Generic (PLEG): container finished" podID="94a6901a-92ec-4fd6-8ee3-ff3e6971c003" containerID="a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea" exitCode=0 Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.639140 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" event={"ID":"94a6901a-92ec-4fd6-8ee3-ff3e6971c003","Type":"ContainerDied","Data":"a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea"} Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.650220 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerStarted","Data":"c53f9fbf42d25da2e1f5c00d75702394a8a8b80db0ffa466037972c6eddf283f"} Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.650651 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.650695 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.650960 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.655334 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:54Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.669410 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:54Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.679880 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.680132 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.692650 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:54Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.707291 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:54Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.736639 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.736683 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:54 crc 
kubenswrapper[4847]: I0218 00:25:54.736693 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.736710 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.736722 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:54Z","lastTransitionTime":"2026-02-18T00:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.736794 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:2
9Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:54Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.758277 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:54Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.776594 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:54Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.802526 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:54Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.820407 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:54Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.839433 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.839499 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.839511 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.839526 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.839537 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:54Z","lastTransitionTime":"2026-02-18T00:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.852074 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:54Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.866613 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 
00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:54Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.880926 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:54Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.898307 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:54Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.912102 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T00:25:54Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.924508 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:54Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.936711 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:54Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.941230 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.941251 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.941259 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.941273 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.941282 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:54Z","lastTransitionTime":"2026-02-18T00:25:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.951067 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:54Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.966801 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:54Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.981779 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:54Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:54 crc kubenswrapper[4847]: I0218 00:25:54.992209 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:54Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.004355 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.014993 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.026917 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.043181 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.043251 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 
00:25:55.043270 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.043295 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.043312 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:55Z","lastTransitionTime":"2026-02-18T00:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.048443 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\
\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.063063 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.081280 4847 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.095093 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.107809 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.125880 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53f9fbf42d25da2e1f5c00d75702394a8a8b80db0ffa466037972c6eddf283f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.139939 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 
00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.145327 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.145371 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.145382 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.145400 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.145413 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:55Z","lastTransitionTime":"2026-02-18T00:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.247910 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.248489 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.248502 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.248524 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.248539 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:55Z","lastTransitionTime":"2026-02-18T00:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.351495 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.351533 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.351544 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.351573 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.351587 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:55Z","lastTransitionTime":"2026-02-18T00:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.356805 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 18:20:07.208891848 +0000 UTC Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.404050 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:25:55 crc kubenswrapper[4847]: E0218 00:25:55.404265 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.454011 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.454057 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.454066 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.454083 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.454095 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:55Z","lastTransitionTime":"2026-02-18T00:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.557673 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.557721 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.557733 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.557750 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.557762 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:55Z","lastTransitionTime":"2026-02-18T00:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.657139 4847 generic.go:334] "Generic (PLEG): container finished" podID="94a6901a-92ec-4fd6-8ee3-ff3e6971c003" containerID="b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131" exitCode=0 Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.657240 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" event={"ID":"94a6901a-92ec-4fd6-8ee3-ff3e6971c003","Type":"ContainerDied","Data":"b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131"} Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.659404 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.659433 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.659442 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.659458 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.659468 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:55Z","lastTransitionTime":"2026-02-18T00:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.681174 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z 
is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.700759 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 
00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.715309 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.732313 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.748392 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.762656 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.762719 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.762738 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.762760 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.762777 4847 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:55Z","lastTransitionTime":"2026-02-18T00:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.775289 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53f9fbf42d25da2e1f5c00d75702394a8a8b80db0ffa466037972c6eddf283f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.792898 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.804898 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.821255 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.847965 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.863083 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.865013 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.865106 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.865119 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.865136 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.865145 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:55Z","lastTransitionTime":"2026-02-18T00:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.882653 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.939323 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.966987 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.967025 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.967038 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.967055 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.967069 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:55Z","lastTransitionTime":"2026-02-18T00:25:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.968628 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:55 crc kubenswrapper[4847]: I0218 00:25:55.978614 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.069549 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.069594 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.069634 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.069651 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.069664 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:56Z","lastTransitionTime":"2026-02-18T00:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.172185 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.172488 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.172563 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.172652 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.172717 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:56Z","lastTransitionTime":"2026-02-18T00:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.274514 4847 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.275971 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.276087 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.276167 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.276287 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.276371 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:56Z","lastTransitionTime":"2026-02-18T00:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.357918 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 03:30:02.640388584 +0000 UTC Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.378695 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.378733 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.378914 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.378930 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.378939 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:56Z","lastTransitionTime":"2026-02-18T00:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.404212 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.404244 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:25:56 crc kubenswrapper[4847]: E0218 00:25:56.404334 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:25:56 crc kubenswrapper[4847]: E0218 00:25:56.404730 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.481375 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.481408 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.481419 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.481452 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.481464 4847 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:56Z","lastTransitionTime":"2026-02-18T00:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.584279 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.584337 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.584357 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.584384 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.584407 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:56Z","lastTransitionTime":"2026-02-18T00:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.665908 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" event={"ID":"94a6901a-92ec-4fd6-8ee3-ff3e6971c003","Type":"ContainerStarted","Data":"408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773"} Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.686663 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugin
s\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2dae
d8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedA
t\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\
",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:56Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.687096 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.687147 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.687166 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.687192 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.687219 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:56Z","lastTransitionTime":"2026-02-18T00:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.713928 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53f9fbf42d25da2e1f5c00d75702394a8a8b80db0ffa466037972c6eddf283f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:56Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.731376 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 
00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:56Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.749090 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:56Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.772452 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:56Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.790034 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.790109 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.790126 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.790153 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.790172 4847 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:56Z","lastTransitionTime":"2026-02-18T00:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.794395 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:56Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.812493 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:56Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.833750 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:56Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.859191 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:56Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.872091 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:56Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.888724 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:56Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.892228 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.892280 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 
00:25:56.892293 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.892313 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.892325 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:56Z","lastTransitionTime":"2026-02-18T00:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.922106 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\
\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:56Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.944909 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:56Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.972594 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:56Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.991484 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:56Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.994293 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:56 crc 
kubenswrapper[4847]: I0218 00:25:56.994333 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.994343 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.994357 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:56 crc kubenswrapper[4847]: I0218 00:25:56.994368 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:56Z","lastTransitionTime":"2026-02-18T00:25:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.096868 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.096915 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.096927 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.096945 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.096956 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:57Z","lastTransitionTime":"2026-02-18T00:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.199065 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.199114 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.199127 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.199146 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.199157 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:57Z","lastTransitionTime":"2026-02-18T00:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.301352 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.301401 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.301417 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.301437 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.301451 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:57Z","lastTransitionTime":"2026-02-18T00:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.359043 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 05:49:04.439719397 +0000 UTC Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.403451 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:25:57 crc kubenswrapper[4847]: E0218 00:25:57.403745 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.405730 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.405846 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.405972 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.406066 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.406147 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:57Z","lastTransitionTime":"2026-02-18T00:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.418221 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:57Z 
is after 2025-08-24T17:21:41Z" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.432755 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.470072 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53f9fbf42d25da2e1f5c00d75702394a8a8b80db0ffa466037972c6eddf283f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.488418 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 
00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.502565 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.508252 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.508292 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.508310 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 
00:25:57.508336 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.508352 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:57Z","lastTransitionTime":"2026-02-18T00:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.524998 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.538786 4847 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.553817 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25
:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.573980 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.592196 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.607927 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.610474 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.610519 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.610531 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.610548 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.610560 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:57Z","lastTransitionTime":"2026-02-18T00:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.633254 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee135705
7ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.661431 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.680736 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.697781 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.713480 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.713740 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.713755 4847 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.713778 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.713794 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:57Z","lastTransitionTime":"2026-02-18T00:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.818170 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.818230 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.818248 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.818272 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.818284 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:57Z","lastTransitionTime":"2026-02-18T00:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.928281 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.928343 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.928364 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.928391 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:57 crc kubenswrapper[4847]: I0218 00:25:57.928410 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:57Z","lastTransitionTime":"2026-02-18T00:25:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.031657 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.031713 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.031726 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.031746 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.031758 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:58Z","lastTransitionTime":"2026-02-18T00:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.135178 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.135235 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.135252 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.135274 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.135289 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:58Z","lastTransitionTime":"2026-02-18T00:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.238272 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.238349 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.238364 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.238383 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.238398 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:58Z","lastTransitionTime":"2026-02-18T00:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.342032 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.342084 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.342093 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.342109 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.342120 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:58Z","lastTransitionTime":"2026-02-18T00:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.359703 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 23:51:39.774612618 +0000 UTC Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.403553 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.403660 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:25:58 crc kubenswrapper[4847]: E0218 00:25:58.403777 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:25:58 crc kubenswrapper[4847]: E0218 00:25:58.404041 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.444819 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.444881 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.444899 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.444923 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.444943 4847 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:58Z","lastTransitionTime":"2026-02-18T00:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.548533 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.548589 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.548622 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.548640 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.548652 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:58Z","lastTransitionTime":"2026-02-18T00:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.650866 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.650920 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.650934 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.650952 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.650969 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:58Z","lastTransitionTime":"2026-02-18T00:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.674351 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxm6w_86e5946b-870b-46f1-8923-4a8abd64da45/ovnkube-controller/0.log" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.679718 4847 generic.go:334] "Generic (PLEG): container finished" podID="86e5946b-870b-46f1-8923-4a8abd64da45" containerID="c53f9fbf42d25da2e1f5c00d75702394a8a8b80db0ffa466037972c6eddf283f" exitCode=1 Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.679765 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerDied","Data":"c53f9fbf42d25da2e1f5c00d75702394a8a8b80db0ffa466037972c6eddf283f"} Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.680494 4847 scope.go:117] "RemoveContainer" containerID="c53f9fbf42d25da2e1f5c00d75702394a8a8b80db0ffa466037972c6eddf283f" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.703586 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T00:25:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.723630 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.749732 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.759277 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.759316 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.759326 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:58 crc 
kubenswrapper[4847]: I0218 00:25:58.759341 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.759351 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:58Z","lastTransitionTime":"2026-02-18T00:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.770125 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.794467 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.827651 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.847442 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.861618 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.862109 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.862148 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.862156 4847 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.862170 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.862180 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:58Z","lastTransitionTime":"2026-02-18T00:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.875036 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.892297 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.923142 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c53f9fbf42d25da2e1f5c00d75702394a8a8b80db0ffa466037972c6eddf283f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53f9fbf42d25da2e1f5c00d75702394a8a8b80db0ffa466037972c6eddf283f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:25:57Z\\\",\\\"message\\\":\\\"lient-go/informers/factory.go:160\\\\nI0218 00:25:57.926582 6113 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 00:25:57.926627 6113 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI0218 00:25:57.926659 6113 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 00:25:57.926675 6113 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 00:25:57.926796 6113 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:25:57.927705 6113 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 00:25:57.927728 6113 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 00:25:57.927775 6113 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 00:25:57.927818 6113 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 00:25:57.927841 6113 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 00:25:57.927868 6113 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 00:25:57.927901 6113 factory.go:656] Stopping watch factory\\\\nI0218 00:25:57.927927 6113 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 00:25:57.927941 6113 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.944432 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.960085 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.965304 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.965344 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.965358 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 
00:25:58.965383 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.965400 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:58Z","lastTransitionTime":"2026-02-18T00:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:58 crc kubenswrapper[4847]: I0218 00:25:58.976511 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:58Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.006087 4847 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exit
Code\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f0
01df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.068001 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.068049 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.068061 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.068078 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.068089 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:59Z","lastTransitionTime":"2026-02-18T00:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.170284 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.170319 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.170335 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.170350 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.170360 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:59Z","lastTransitionTime":"2026-02-18T00:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.273210 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.273278 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.273288 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.273301 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.273313 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:59Z","lastTransitionTime":"2026-02-18T00:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.360001 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 19:58:54.431169243 +0000 UTC Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.375787 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.375821 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.375830 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.375843 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.375854 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:59Z","lastTransitionTime":"2026-02-18T00:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.403908 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:25:59 crc kubenswrapper[4847]: E0218 00:25:59.404081 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.478355 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.478416 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.478433 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.478459 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.478478 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:59Z","lastTransitionTime":"2026-02-18T00:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.580635 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.580668 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.580678 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.580692 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.580703 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:59Z","lastTransitionTime":"2026-02-18T00:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.682225 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.682254 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.682264 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.682288 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.682298 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:59Z","lastTransitionTime":"2026-02-18T00:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.684588 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxm6w_86e5946b-870b-46f1-8923-4a8abd64da45/ovnkube-controller/0.log" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.686906 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerStarted","Data":"a534bc47fbd4eb0d9b494619634888e4eb4493faae918f5792adebbb093e8e6f"} Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.687913 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.699244 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55
b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.707759 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.726653 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00
:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.738028 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.751125 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.767367 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.780690 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.784878 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.784918 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.784927 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:59 crc 
kubenswrapper[4847]: I0218 00:25:59.784946 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.784955 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:59Z","lastTransitionTime":"2026-02-18T00:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.791558 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.803768 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.818299 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.833177 4847 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.843649 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.855412 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4
a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.880066 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a534bc47fbd4eb0d9b494619634888e4eb4493faae918f5792adebbb093e8e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53f9fbf42d25da2e1f5c00d75702394a8a8b80db0ffa466037972c6eddf283f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:25:57Z\\\",\\\"message\\\":\\\"lient-go/informers/factory.go:160\\\\nI0218 00:25:57.926582 6113 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 00:25:57.926627 6113 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 00:25:57.926659 6113 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 00:25:57.926675 6113 
handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 00:25:57.926796 6113 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:25:57.927705 6113 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 00:25:57.927728 6113 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 00:25:57.927775 6113 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 00:25:57.927818 6113 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 00:25:57.927841 6113 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 00:25:57.927868 6113 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 00:25:57.927901 6113 factory.go:656] Stopping watch factory\\\\nI0218 00:25:57.927927 6113 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 00:25:57.927941 6113 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.887244 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.887287 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.887299 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.887313 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.887322 4847 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:59Z","lastTransitionTime":"2026-02-18T00:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.895025 4847 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.899236 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 
00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.989867 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.989895 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.989905 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.989916 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:25:59 crc kubenswrapper[4847]: I0218 00:25:59.989925 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:25:59Z","lastTransitionTime":"2026-02-18T00:25:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.092412 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.092458 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.092470 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.092487 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.092498 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:00Z","lastTransitionTime":"2026-02-18T00:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.194430 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.194685 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.194819 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.194920 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.194987 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:00Z","lastTransitionTime":"2026-02-18T00:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.245339 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.263970 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.282908 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a
7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPat
h\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.297547 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.297581 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.297589 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.297621 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.297635 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:00Z","lastTransitionTime":"2026-02-18T00:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.315494 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a534bc47fbd4eb0d9b494619634888e4eb4493faae918f5792adebbb093e8e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53f9fbf42d25da2e1f5c00d75702394a8a8b80db0ffa466037972c6eddf283f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:25:57Z\\\",\\\"message\\\":\\\"lient-go/informers/factory.go:160\\\\nI0218 00:25:57.926582 6113 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 00:25:57.926627 6113 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 00:25:57.926659 6113 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 00:25:57.926675 6113 
handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 00:25:57.926796 6113 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:25:57.927705 6113 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 00:25:57.927728 6113 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 00:25:57.927775 6113 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 00:25:57.927818 6113 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 00:25:57.927841 6113 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 00:25:57.927868 6113 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 00:25:57.927901 6113 factory.go:656] Stopping watch factory\\\\nI0218 00:25:57.927927 6113 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 00:25:57.927941 6113 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.330705 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\"
,\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.343892 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.355369 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.360323 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 17:58:55.700655674 +0000 UTC Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.370615 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.381871 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.392799 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.399317 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.399353 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.399362 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:00 crc 
kubenswrapper[4847]: I0218 00:26:00.399377 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.399388 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:00Z","lastTransitionTime":"2026-02-18T00:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.404146 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:00 crc kubenswrapper[4847]: E0218 00:26:00.404300 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.404153 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:00 crc kubenswrapper[4847]: E0218 00:26:00.404488 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.405960 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.419143 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.435374 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.454703 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.472657 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.485052 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.501264 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.501360 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.501379 4847 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.501409 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.501426 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:00Z","lastTransitionTime":"2026-02-18T00:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.604338 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.604411 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.604437 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.604468 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.604499 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:00Z","lastTransitionTime":"2026-02-18T00:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.695356 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxm6w_86e5946b-870b-46f1-8923-4a8abd64da45/ovnkube-controller/1.log" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.696761 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxm6w_86e5946b-870b-46f1-8923-4a8abd64da45/ovnkube-controller/0.log" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.700639 4847 generic.go:334] "Generic (PLEG): container finished" podID="86e5946b-870b-46f1-8923-4a8abd64da45" containerID="a534bc47fbd4eb0d9b494619634888e4eb4493faae918f5792adebbb093e8e6f" exitCode=1 Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.700726 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerDied","Data":"a534bc47fbd4eb0d9b494619634888e4eb4493faae918f5792adebbb093e8e6f"} Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.700969 4847 scope.go:117] "RemoveContainer" containerID="c53f9fbf42d25da2e1f5c00d75702394a8a8b80db0ffa466037972c6eddf283f" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.701791 4847 scope.go:117] "RemoveContainer" containerID="a534bc47fbd4eb0d9b494619634888e4eb4493faae918f5792adebbb093e8e6f" Feb 18 00:26:00 crc kubenswrapper[4847]: E0218 00:26:00.702055 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bxm6w_openshift-ovn-kubernetes(86e5946b-870b-46f1-8923-4a8abd64da45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.706844 4847 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.707091 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.707253 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.707406 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.707596 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:00Z","lastTransitionTime":"2026-02-18T00:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.715173 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.731085 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.745692 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.761624 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.777461 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.789736 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.803579 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.810194 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.810238 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 
00:26:00.810251 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.810271 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.810283 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:00Z","lastTransitionTime":"2026-02-18T00:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.833075 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\
\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.847003 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.865621 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.881342 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o:/
/92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.903490 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4
a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.913106 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.913299 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.913390 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.913477 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.913635 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:00Z","lastTransitionTime":"2026-02-18T00:26:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.929482 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a534bc47fbd4eb0d9b494619634888e4eb4493faae918f5792adebbb093e8e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53f9fbf42d25da2e1f5c00d75702394a8a8b80db0ffa466037972c6eddf283f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:25:57Z\\\",\\\"message\\\":\\\"lient-go/informers/factory.go:160\\\\nI0218 00:25:57.926582 6113 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 00:25:57.926627 6113 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 00:25:57.926659 6113 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 00:25:57.926675 6113 
handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 00:25:57.926796 6113 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:25:57.927705 6113 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 00:25:57.927728 6113 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 00:25:57.927775 6113 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 00:25:57.927818 6113 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 00:25:57.927841 6113 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 00:25:57.927868 6113 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 00:25:57.927901 6113 factory.go:656] Stopping watch factory\\\\nI0218 00:25:57.927927 6113 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 00:25:57.927941 6113 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a534bc47fbd4eb0d9b494619634888e4eb4493faae918f5792adebbb093e8e6f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:25:59Z\\\",\\\"message\\\":\\\" success event on pod openshift-machine-config-operator/machine-config-daemon-xsj47\\\\nI0218 00:25:59.741548 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0218 00:25:59.741556 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-wprf4\\\\nI0218 00:25:59.741559 6266 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0218 00:25:59.741561 6266 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed 
to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z]\\\\nI0218 00:25:59.741563 6266 ovn.go:134] Ensuring zone local for Pod openshift-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPa
th\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.948068 4847 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e
70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 
00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b8
9c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:00 crc kubenswrapper[4847]: I0218 00:26:00.964969 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:00Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.016017 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.016055 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:01 crc 
kubenswrapper[4847]: I0218 00:26:01.016064 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.016080 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.016089 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:01Z","lastTransitionTime":"2026-02-18T00:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.079777 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl"] Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.080273 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.082251 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.083118 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.093630 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vk8bl\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.114425 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00
:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.118793 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.118835 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.118844 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.118860 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.118869 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:01Z","lastTransitionTime":"2026-02-18T00:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.134906 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.148489 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.164509 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4
a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.197381 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a534bc47fbd4eb0d9b494619634888e4eb4493faae918f5792adebbb093e8e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c53f9fbf42d25da2e1f5c00d75702394a8a8b80db0ffa466037972c6eddf283f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:25:57Z\\\",\\\"message\\\":\\\"lient-go/informers/factory.go:160\\\\nI0218 00:25:57.926582 6113 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0218 00:25:57.926627 6113 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0218 00:25:57.926659 6113 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0218 00:25:57.926675 6113 
handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0218 00:25:57.926796 6113 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0218 00:25:57.927705 6113 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0218 00:25:57.927728 6113 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0218 00:25:57.927775 6113 handler.go:208] Removed *v1.Node event handler 2\\\\nI0218 00:25:57.927818 6113 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 00:25:57.927841 6113 handler.go:208] Removed *v1.Node event handler 7\\\\nI0218 00:25:57.927868 6113 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0218 00:25:57.927901 6113 factory.go:656] Stopping watch factory\\\\nI0218 00:25:57.927927 6113 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0218 00:25:57.927941 6113 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a534bc47fbd4eb0d9b494619634888e4eb4493faae918f5792adebbb093e8e6f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:25:59Z\\\",\\\"message\\\":\\\" success event on pod openshift-machine-config-operator/machine-config-daemon-xsj47\\\\nI0218 00:25:59.741548 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0218 00:25:59.741556 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-wprf4\\\\nI0218 00:25:59.741559 6266 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0218 00:25:59.741561 6266 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed 
to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z]\\\\nI0218 00:25:59.741563 6266 ovn.go:134] Ensuring zone local for Pod openshift-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPa
th\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.209349 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-h8l66\" (UniqueName: \"kubernetes.io/projected/69d302fa-7d6b-4e4b-9dfe-71ed7d60b342-kube-api-access-h8l66\") pod \"ovnkube-control-plane-749d76644c-vk8bl\" (UID: \"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.209430 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/69d302fa-7d6b-4e4b-9dfe-71ed7d60b342-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vk8bl\" (UID: \"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.209481 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/69d302fa-7d6b-4e4b-9dfe-71ed7d60b342-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vk8bl\" (UID: \"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.209517 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/69d302fa-7d6b-4e4b-9dfe-71ed7d60b342-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vk8bl\" (UID: \"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.210187 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\"
,\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.221501 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.221566 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.221586 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.221639 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.221659 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:01Z","lastTransitionTime":"2026-02-18T00:26:01Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.223926 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.233552 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e
1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.260050 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.278117 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.290235 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.302482 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.310137 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8l66\" (UniqueName: \"kubernetes.io/projected/69d302fa-7d6b-4e4b-9dfe-71ed7d60b342-kube-api-access-h8l66\") pod \"ovnkube-control-plane-749d76644c-vk8bl\" (UID: \"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.310245 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/69d302fa-7d6b-4e4b-9dfe-71ed7d60b342-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vk8bl\" (UID: \"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.310362 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/69d302fa-7d6b-4e4b-9dfe-71ed7d60b342-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vk8bl\" (UID: \"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.310428 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/69d302fa-7d6b-4e4b-9dfe-71ed7d60b342-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vk8bl\" (UID: \"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.310840 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/69d302fa-7d6b-4e4b-9dfe-71ed7d60b342-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vk8bl\" (UID: \"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.311969 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/69d302fa-7d6b-4e4b-9dfe-71ed7d60b342-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vk8bl\" (UID: \"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.314703 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.319750 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/69d302fa-7d6b-4e4b-9dfe-71ed7d60b342-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vk8bl\" (UID: \"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.324766 4847 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.324846 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.324872 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.325061 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.325100 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:01Z","lastTransitionTime":"2026-02-18T00:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.327470 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8l66\" (UniqueName: \"kubernetes.io/projected/69d302fa-7d6b-4e4b-9dfe-71ed7d60b342-kube-api-access-h8l66\") pod \"ovnkube-control-plane-749d76644c-vk8bl\" (UID: \"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.328861 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb27
6703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.341669 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.360900 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 18:17:23.561849991 +0000 UTC Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.403809 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:01 crc kubenswrapper[4847]: E0218 00:26:01.404050 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.407913 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" Feb 18 00:26:01 crc kubenswrapper[4847]: W0218 00:26:01.423519 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69d302fa_7d6b_4e4b_9dfe_71ed7d60b342.slice/crio-384dd92ee5d182cb8ba27d02a0e497dfd66bc29ac4a54a8e297599e0a87a2fba WatchSource:0}: Error finding container 384dd92ee5d182cb8ba27d02a0e497dfd66bc29ac4a54a8e297599e0a87a2fba: Status 404 returned error can't find the container with id 384dd92ee5d182cb8ba27d02a0e497dfd66bc29ac4a54a8e297599e0a87a2fba Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.427340 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.427386 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.427402 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.427451 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.427471 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:01Z","lastTransitionTime":"2026-02-18T00:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.529777 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.529813 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.529826 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.529842 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.529852 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:01Z","lastTransitionTime":"2026-02-18T00:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.632414 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.632458 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.632473 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.632495 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.632509 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:01Z","lastTransitionTime":"2026-02-18T00:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.705149 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxm6w_86e5946b-870b-46f1-8923-4a8abd64da45/ovnkube-controller/1.log" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.708131 4847 scope.go:117] "RemoveContainer" containerID="a534bc47fbd4eb0d9b494619634888e4eb4493faae918f5792adebbb093e8e6f" Feb 18 00:26:01 crc kubenswrapper[4847]: E0218 00:26:01.708325 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bxm6w_openshift-ovn-kubernetes(86e5946b-870b-46f1-8923-4a8abd64da45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.709251 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" event={"ID":"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342","Type":"ContainerStarted","Data":"38a216f41297e68f3ea5eed1a47484c8e9e6cacfd432e7f8f33624d4e6277cd1"} Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.709296 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" event={"ID":"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342","Type":"ContainerStarted","Data":"448c6cccc0c45f6e1da713ddeb37c39ed13a70a3cab0dc7962d39af8f6d97599"} Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.709307 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" event={"ID":"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342","Type":"ContainerStarted","Data":"384dd92ee5d182cb8ba27d02a0e497dfd66bc29ac4a54a8e297599e0a87a2fba"} Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.729328 4847 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a534bc47fbd4eb0d9b494619634888e4eb4493faae918f5792adebbb093e8e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a534bc47fbd4eb0d9b494619634888e4eb4493faae918f5792adebbb093e8e6f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:25:59Z\\\",\\\"message\\\":\\\" success event on pod openshift-machine-config-operator/machine-config-daemon-xsj47\\\\nI0218 00:25:59.741548 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0218 00:25:59.741556 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-wprf4\\\\nI0218 00:25:59.741559 6266 
default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0218 00:25:59.741561 6266 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z]\\\\nI0218 00:25:59.741563 6266 ovn.go:134] Ensuring zone local for Pod openshift-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxm6w_openshift-ovn-kubernetes(86e5946b-870b-46f1-8923-4a8abd64da45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55
bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.735658 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.736190 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.736204 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.736235 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.736252 4847 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:01Z","lastTransitionTime":"2026-02-18T00:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.751372 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1
657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.763665 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.777120 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.804438 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4
a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.818514 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.830325 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.840089 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.840140 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.840152 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.840171 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 
00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.840182 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:01Z","lastTransitionTime":"2026-02-18T00:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.844360 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.856358 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\
\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.866155 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.882378 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.892420 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.903734 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.913435 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.930980 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.943115 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:01 crc 
kubenswrapper[4847]: I0218 00:26:01.943183 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.943194 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.943213 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.943223 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:01Z","lastTransitionTime":"2026-02-18T00:26:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.944795 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vk8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.958867 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{
\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.970556 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:01 crc kubenswrapper[4847]: I0218 00:26:01.985176 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:01Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.009443 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.029031 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.043932 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.045673 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.045705 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.045716 4847 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.045733 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.045743 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:02Z","lastTransitionTime":"2026-02-18T00:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.060381 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.071939 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.084054 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.098035 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.110899 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448c6cccc0c45f6e1da713ddeb37c39ed13a70a3cab0dc7962d39af8f6d97599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f1
2962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a216f41297e68f3ea5eed1a47484c8e9e6cacfd432e7f8f33624d4e6277cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vk8bl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.119700 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:26:02 crc kubenswrapper[4847]: E0218 00:26:02.119894 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:26:18.119862859 +0000 UTC m=+51.497213801 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.125139 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\
"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.139564 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.148530 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.148594 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.148624 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.148652 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.148665 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:02Z","lastTransitionTime":"2026-02-18T00:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.155896 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.174072 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4
a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.198952 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a534bc47fbd4eb0d9b494619634888e4eb4493faae918f5792adebbb093e8e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a534bc47fbd4eb0d9b494619634888e4eb4493faae918f5792adebbb093e8e6f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:25:59Z\\\",\\\"message\\\":\\\" success event on pod openshift-machine-config-operator/machine-config-daemon-xsj47\\\\nI0218 00:25:59.741548 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0218 00:25:59.741556 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-wprf4\\\\nI0218 00:25:59.741559 6266 
default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0218 00:25:59.741561 6266 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z]\\\\nI0218 00:25:59.741563 6266 ovn.go:134] Ensuring zone local for Pod openshift-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxm6w_openshift-ovn-kubernetes(86e5946b-870b-46f1-8923-4a8abd64da45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55
bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.221317 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.221376 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.221407 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.221446 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:02 crc kubenswrapper[4847]: E0218 00:26:02.221545 4847 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:26:02 crc kubenswrapper[4847]: E0218 00:26:02.221552 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:26:02 crc kubenswrapper[4847]: E0218 00:26:02.221574 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:26:02 crc kubenswrapper[4847]: E0218 00:26:02.221572 4847 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:26:02 crc kubenswrapper[4847]: E0218 00:26:02.221581 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:26:02 
crc kubenswrapper[4847]: E0218 00:26:02.221657 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:26:02 crc kubenswrapper[4847]: E0218 00:26:02.221588 4847 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:26:02 crc kubenswrapper[4847]: E0218 00:26:02.221675 4847 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:26:02 crc kubenswrapper[4847]: E0218 00:26:02.221626 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:26:18.221593121 +0000 UTC m=+51.598944063 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:26:02 crc kubenswrapper[4847]: E0218 00:26:02.221729 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:26:18.221705824 +0000 UTC m=+51.599056766 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:26:02 crc kubenswrapper[4847]: E0218 00:26:02.221741 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 00:26:18.221736314 +0000 UTC m=+51.599087256 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:26:02 crc kubenswrapper[4847]: E0218 00:26:02.221758 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 00:26:18.221751335 +0000 UTC m=+51.599102277 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.251170 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.251207 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.251217 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.251236 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.251247 4847 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:02Z","lastTransitionTime":"2026-02-18T00:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.354584 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.354668 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.354684 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.354710 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.354730 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:02Z","lastTransitionTime":"2026-02-18T00:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.361778 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 12:12:03.146049883 +0000 UTC Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.403323 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.403439 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:02 crc kubenswrapper[4847]: E0218 00:26:02.403543 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:02 crc kubenswrapper[4847]: E0218 00:26:02.403844 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.458180 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.458227 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.458236 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.458259 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.458281 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:02Z","lastTransitionTime":"2026-02-18T00:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.562110 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.562177 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.562190 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.562211 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.562230 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:02Z","lastTransitionTime":"2026-02-18T00:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.592232 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-5rg76"] Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.592763 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:02 crc kubenswrapper[4847]: E0218 00:26:02.592833 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.614221 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/ku
be-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.632375 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.648795 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.665116 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.665212 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 
00:26:02.665262 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.665289 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.665306 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:02Z","lastTransitionTime":"2026-02-18T00:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.669665 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\
\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.695503 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.712250 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.727766 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svnct\" (UniqueName: \"kubernetes.io/projected/1a7318b6-f24d-4785-bd56-ad5ecec493da-kube-api-access-svnct\") pod \"network-metrics-daemon-5rg76\" (UID: \"1a7318b6-f24d-4785-bd56-ad5ecec493da\") " pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.727846 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs\") pod \"network-metrics-daemon-5rg76\" (UID: \"1a7318b6-f24d-4785-bd56-ad5ecec493da\") " pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.736737 4847 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.748678 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.759885 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448c6cccc0c45f6e1da713ddeb37c39ed13a70a3cab0dc7962d39af8f6d97599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a216f41
297e68f3ea5eed1a47484c8e9e6cacfd432e7f8f33624d4e6277cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vk8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.767457 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.767526 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.767543 4847 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.767568 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.767641 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:02Z","lastTransitionTime":"2026-02-18T00:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.780800 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a534bc47fbd4eb0d9b494619634888e4eb4493faae918f5792adebbb093e8e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a534bc47fbd4eb0d9b494619634888e4eb4493faae918f5792adebbb093e8e6f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:25:59Z\\\",\\\"message\\\":\\\" success event on pod openshift-machine-config-operator/machine-config-daemon-xsj47\\\\nI0218 00:25:59.741548 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0218 00:25:59.741556 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-wprf4\\\\nI0218 
00:25:59.741559 6266 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0218 00:25:59.741561 6266 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z]\\\\nI0218 00:25:59.741563 6266 ovn.go:134] Ensuring zone local for Pod openshift-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxm6w_openshift-ovn-kubernetes(86e5946b-870b-46f1-8923-4a8abd64da45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55
bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.793928 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5rg76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a7318b6-f24d-4785-bd56-ad5ecec493da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5rg76\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.813723 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] 
\\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.828570 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svnct\" (UniqueName: \"kubernetes.io/projected/1a7318b6-f24d-4785-bd56-ad5ecec493da-kube-api-access-svnct\") pod \"network-metrics-daemon-5rg76\" (UID: \"1a7318b6-f24d-4785-bd56-ad5ecec493da\") " pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.828650 4847 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs\") pod \"network-metrics-daemon-5rg76\" (UID: \"1a7318b6-f24d-4785-bd56-ad5ecec493da\") " pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:02 crc kubenswrapper[4847]: E0218 00:26:02.828828 4847 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:26:02 crc kubenswrapper[4847]: E0218 00:26:02.828892 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs podName:1a7318b6-f24d-4785-bd56-ad5ecec493da nodeName:}" failed. No retries permitted until 2026-02-18 00:26:03.328874879 +0000 UTC m=+36.706225831 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs") pod "network-metrics-daemon-5rg76" (UID: "1a7318b6-f24d-4785-bd56-ad5ecec493da") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.834727 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.851811 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.854663 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svnct\" (UniqueName: \"kubernetes.io/projected/1a7318b6-f24d-4785-bd56-ad5ecec493da-kube-api-access-svnct\") pod \"network-metrics-daemon-5rg76\" (UID: \"1a7318b6-f24d-4785-bd56-ad5ecec493da\") " pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.875446 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.875487 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.875499 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 
00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.875517 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.875530 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:02Z","lastTransitionTime":"2026-02-18T00:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.877383 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\
\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.893872 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.911685 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:02Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.980043 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.980084 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.980096 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.980110 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:02 crc kubenswrapper[4847]: I0218 00:26:02.980121 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:02Z","lastTransitionTime":"2026-02-18T00:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.082313 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.082359 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.082372 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.082387 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.082399 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:03Z","lastTransitionTime":"2026-02-18T00:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.185431 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.185474 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.185485 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.185502 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.185515 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:03Z","lastTransitionTime":"2026-02-18T00:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.287272 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.287338 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.287357 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.287389 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.287411 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:03Z","lastTransitionTime":"2026-02-18T00:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.333310 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs\") pod \"network-metrics-daemon-5rg76\" (UID: \"1a7318b6-f24d-4785-bd56-ad5ecec493da\") " pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:03 crc kubenswrapper[4847]: E0218 00:26:03.333436 4847 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:26:03 crc kubenswrapper[4847]: E0218 00:26:03.333516 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs podName:1a7318b6-f24d-4785-bd56-ad5ecec493da nodeName:}" failed. No retries permitted until 2026-02-18 00:26:04.333496333 +0000 UTC m=+37.710847285 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs") pod "network-metrics-daemon-5rg76" (UID: "1a7318b6-f24d-4785-bd56-ad5ecec493da") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.362867 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:54:38.673433051 +0000 UTC Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.391319 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.391385 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.391403 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.391427 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.391442 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:03Z","lastTransitionTime":"2026-02-18T00:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.404182 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:03 crc kubenswrapper[4847]: E0218 00:26:03.404336 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.495236 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.495333 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.495356 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.495388 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.495416 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:03Z","lastTransitionTime":"2026-02-18T00:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.601588 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.601709 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.601730 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.601763 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.601787 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:03Z","lastTransitionTime":"2026-02-18T00:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.621267 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.621342 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.621362 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.621394 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.621417 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:03Z","lastTransitionTime":"2026-02-18T00:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:03 crc kubenswrapper[4847]: E0218 00:26:03.645484 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:03Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.652073 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.652152 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.652170 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.652203 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.652224 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:03Z","lastTransitionTime":"2026-02-18T00:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:03 crc kubenswrapper[4847]: E0218 00:26:03.674940 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:03Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.682125 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.682188 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.682206 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.682233 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.682258 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:03Z","lastTransitionTime":"2026-02-18T00:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:03 crc kubenswrapper[4847]: E0218 00:26:03.702492 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:03Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.708183 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.708247 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.708265 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.708294 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.708313 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:03Z","lastTransitionTime":"2026-02-18T00:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:03 crc kubenswrapper[4847]: E0218 00:26:03.729309 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:03Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.734517 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.734595 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.734641 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.734675 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.734698 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:03Z","lastTransitionTime":"2026-02-18T00:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:03 crc kubenswrapper[4847]: E0218 00:26:03.754291 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:03Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:03 crc kubenswrapper[4847]: E0218 00:26:03.754515 4847 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.756597 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.756673 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.756690 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.756716 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.756735 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:03Z","lastTransitionTime":"2026-02-18T00:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.860117 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.860173 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.860192 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.860218 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.860237 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:03Z","lastTransitionTime":"2026-02-18T00:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.962526 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.962563 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.962572 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.962586 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:03 crc kubenswrapper[4847]: I0218 00:26:03.962618 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:03Z","lastTransitionTime":"2026-02-18T00:26:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.065302 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.065353 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.065364 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.065381 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.065393 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:04Z","lastTransitionTime":"2026-02-18T00:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.168499 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.168538 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.168553 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.168568 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.168581 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:04Z","lastTransitionTime":"2026-02-18T00:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.270517 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.270569 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.270581 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.270630 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.270656 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:04Z","lastTransitionTime":"2026-02-18T00:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.346971 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs\") pod \"network-metrics-daemon-5rg76\" (UID: \"1a7318b6-f24d-4785-bd56-ad5ecec493da\") " pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:04 crc kubenswrapper[4847]: E0218 00:26:04.347151 4847 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:26:04 crc kubenswrapper[4847]: E0218 00:26:04.347246 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs podName:1a7318b6-f24d-4785-bd56-ad5ecec493da nodeName:}" failed. No retries permitted until 2026-02-18 00:26:06.347223499 +0000 UTC m=+39.724574461 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs") pod "network-metrics-daemon-5rg76" (UID: "1a7318b6-f24d-4785-bd56-ad5ecec493da") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.363084 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 18:03:22.935071923 +0000 UTC Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.373439 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.373477 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.373489 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.373506 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.373519 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:04Z","lastTransitionTime":"2026-02-18T00:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.403150 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.403203 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.403273 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:04 crc kubenswrapper[4847]: E0218 00:26:04.403380 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:04 crc kubenswrapper[4847]: E0218 00:26:04.403518 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:04 crc kubenswrapper[4847]: E0218 00:26:04.403747 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.476086 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.476126 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.476137 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.476154 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.476169 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:04Z","lastTransitionTime":"2026-02-18T00:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.578587 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.578684 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.578703 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.578728 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.578746 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:04Z","lastTransitionTime":"2026-02-18T00:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.681731 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.681795 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.681808 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.681829 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.681841 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:04Z","lastTransitionTime":"2026-02-18T00:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.784894 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.785001 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.785059 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.785086 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.785116 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:04Z","lastTransitionTime":"2026-02-18T00:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.887257 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.887306 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.887314 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.887332 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.887343 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:04Z","lastTransitionTime":"2026-02-18T00:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.989623 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.989683 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.989694 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.989711 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:04 crc kubenswrapper[4847]: I0218 00:26:04.989724 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:04Z","lastTransitionTime":"2026-02-18T00:26:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.093904 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.093978 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.093997 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.094026 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.094056 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:05Z","lastTransitionTime":"2026-02-18T00:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.197752 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.197812 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.197831 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.197859 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.197880 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:05Z","lastTransitionTime":"2026-02-18T00:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.301015 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.301384 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.301511 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.301666 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.301785 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:05Z","lastTransitionTime":"2026-02-18T00:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.363342 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 01:58:02.08497191 +0000 UTC Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.403712 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:05 crc kubenswrapper[4847]: E0218 00:26:05.404010 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.404116 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.404181 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.404203 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.404236 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.404260 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:05Z","lastTransitionTime":"2026-02-18T00:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.507748 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.507819 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.507845 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.507874 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.507898 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:05Z","lastTransitionTime":"2026-02-18T00:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.611215 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.611337 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.611361 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.611392 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.611412 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:05Z","lastTransitionTime":"2026-02-18T00:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.713975 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.714075 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.714096 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.714132 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.714155 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:05Z","lastTransitionTime":"2026-02-18T00:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.817725 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.817801 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.817821 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.817851 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.817874 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:05Z","lastTransitionTime":"2026-02-18T00:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.921473 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.921542 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.921563 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.921594 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:05 crc kubenswrapper[4847]: I0218 00:26:05.921656 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:05Z","lastTransitionTime":"2026-02-18T00:26:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.024958 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.025032 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.025050 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.025095 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.025113 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:06Z","lastTransitionTime":"2026-02-18T00:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.128199 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.128263 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.128275 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.128291 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.128302 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:06Z","lastTransitionTime":"2026-02-18T00:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.231073 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.231145 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.231166 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.231192 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.231213 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:06Z","lastTransitionTime":"2026-02-18T00:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.334819 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.334886 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.334906 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.334935 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.334956 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:06Z","lastTransitionTime":"2026-02-18T00:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.364440 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 11:48:51.036865905 +0000 UTC Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.372199 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs\") pod \"network-metrics-daemon-5rg76\" (UID: \"1a7318b6-f24d-4785-bd56-ad5ecec493da\") " pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:06 crc kubenswrapper[4847]: E0218 00:26:06.372456 4847 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:26:06 crc kubenswrapper[4847]: E0218 00:26:06.372769 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs podName:1a7318b6-f24d-4785-bd56-ad5ecec493da nodeName:}" failed. No retries permitted until 2026-02-18 00:26:10.372719793 +0000 UTC m=+43.750070785 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs") pod "network-metrics-daemon-5rg76" (UID: "1a7318b6-f24d-4785-bd56-ad5ecec493da") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.403750 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.403917 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:06 crc kubenswrapper[4847]: E0218 00:26:06.403964 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.404029 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:06 crc kubenswrapper[4847]: E0218 00:26:06.404142 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:06 crc kubenswrapper[4847]: E0218 00:26:06.404322 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.438705 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.438782 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.438802 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.438836 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.438857 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:06Z","lastTransitionTime":"2026-02-18T00:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.542032 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.542092 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.542108 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.542132 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.542150 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:06Z","lastTransitionTime":"2026-02-18T00:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.645582 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.645646 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.645656 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.645672 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.645682 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:06Z","lastTransitionTime":"2026-02-18T00:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.748143 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.748244 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.748264 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.748352 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.748388 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:06Z","lastTransitionTime":"2026-02-18T00:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.851416 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.851966 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.852154 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.852365 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.852637 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:06Z","lastTransitionTime":"2026-02-18T00:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.957384 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.957427 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.957437 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.957451 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:06 crc kubenswrapper[4847]: I0218 00:26:06.957460 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:06Z","lastTransitionTime":"2026-02-18T00:26:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.060189 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.060223 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.060233 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.060245 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.060255 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:07Z","lastTransitionTime":"2026-02-18T00:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.162753 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.162815 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.162830 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.162852 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.162866 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:07Z","lastTransitionTime":"2026-02-18T00:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.266203 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.266239 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.266279 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.266296 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.266309 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:07Z","lastTransitionTime":"2026-02-18T00:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.364886 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 08:45:56.663236532 +0000 UTC Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.368755 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.368797 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.368810 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.368833 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.368846 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:07Z","lastTransitionTime":"2026-02-18T00:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.403411 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:07 crc kubenswrapper[4847]: E0218 00:26:07.403513 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.419495 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.431772 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.443513 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.456816 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.471106 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.471145 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 
00:26:07.471158 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.471176 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.471187 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:07Z","lastTransitionTime":"2026-02-18T00:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.480240 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\
\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.494693 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.507232 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.524827 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.542339 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.557372 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.569310 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448c6cccc0c45f6e1da713ddeb37c39ed13a70a3cab0dc7962d39af8f6d97599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a216f41
297e68f3ea5eed1a47484c8e9e6cacfd432e7f8f33624d4e6277cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vk8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.574356 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.574393 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.574405 4847 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.574422 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.574434 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:07Z","lastTransitionTime":"2026-02-18T00:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.578646 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5rg76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a7318b6-f24d-4785-bd56-ad5ecec493da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5rg76\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.597977 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.611161 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.628620 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.648711 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4
a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.673705 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a534bc47fbd4eb0d9b494619634888e4eb4493faae918f5792adebbb093e8e6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a534bc47fbd4eb0d9b494619634888e4eb4493faae918f5792adebbb093e8e6f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:25:59Z\\\",\\\"message\\\":\\\" success event on pod openshift-machine-config-operator/machine-config-daemon-xsj47\\\\nI0218 00:25:59.741548 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0218 00:25:59.741556 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-wprf4\\\\nI0218 00:25:59.741559 6266 
default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0218 00:25:59.741561 6266 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z]\\\\nI0218 00:25:59.741563 6266 ovn.go:134] Ensuring zone local for Pod openshift-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxm6w_openshift-ovn-kubernetes(86e5946b-870b-46f1-8923-4a8abd64da45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55
bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:07Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.677378 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.677406 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.677414 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.677427 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.677437 4847 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:07Z","lastTransitionTime":"2026-02-18T00:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.781441 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.781756 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.781838 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.781875 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.781896 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:07Z","lastTransitionTime":"2026-02-18T00:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.884498 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.884579 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.884650 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.884693 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.884719 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:07Z","lastTransitionTime":"2026-02-18T00:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.987451 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.987497 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.987519 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.987537 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:07 crc kubenswrapper[4847]: I0218 00:26:07.987549 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:07Z","lastTransitionTime":"2026-02-18T00:26:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.090023 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.090055 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.090063 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.090078 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.090089 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:08Z","lastTransitionTime":"2026-02-18T00:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.192127 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.192182 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.192193 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.192209 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.192221 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:08Z","lastTransitionTime":"2026-02-18T00:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.294333 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.294384 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.294397 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.294416 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.294429 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:08Z","lastTransitionTime":"2026-02-18T00:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.365561 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 17:37:54.534610524 +0000 UTC Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.396973 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.397010 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.397018 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.397031 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.397041 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:08Z","lastTransitionTime":"2026-02-18T00:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.403636 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.403682 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.403641 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:08 crc kubenswrapper[4847]: E0218 00:26:08.403814 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:08 crc kubenswrapper[4847]: E0218 00:26:08.403912 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:08 crc kubenswrapper[4847]: E0218 00:26:08.404002 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.499459 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.499511 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.499523 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.499539 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.499548 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:08Z","lastTransitionTime":"2026-02-18T00:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.601794 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.601867 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.601896 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.601916 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.601925 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:08Z","lastTransitionTime":"2026-02-18T00:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.704181 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.704258 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.704283 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.704314 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.704339 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:08Z","lastTransitionTime":"2026-02-18T00:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.807245 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.807292 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.807311 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.807331 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.807346 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:08Z","lastTransitionTime":"2026-02-18T00:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.910402 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.910792 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.910973 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.911163 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:08 crc kubenswrapper[4847]: I0218 00:26:08.911285 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:08Z","lastTransitionTime":"2026-02-18T00:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.015117 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.015393 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.015526 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.015697 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.015783 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:09Z","lastTransitionTime":"2026-02-18T00:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.119089 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.119145 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.119159 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.119181 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.119195 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:09Z","lastTransitionTime":"2026-02-18T00:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.222455 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.222509 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.222526 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.222550 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.222567 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:09Z","lastTransitionTime":"2026-02-18T00:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.325975 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.326043 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.326067 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.326096 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.326118 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:09Z","lastTransitionTime":"2026-02-18T00:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.366847 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 00:01:23.946165673 +0000 UTC Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.403281 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:09 crc kubenswrapper[4847]: E0218 00:26:09.403491 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.429047 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.432113 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.432123 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.432163 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.432179 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:09Z","lastTransitionTime":"2026-02-18T00:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.535771 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.535818 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.535836 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.535859 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.535876 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:09Z","lastTransitionTime":"2026-02-18T00:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.638896 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.639172 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.639202 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.639224 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.639234 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:09Z","lastTransitionTime":"2026-02-18T00:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.741942 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.741979 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.741988 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.742002 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.742013 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:09Z","lastTransitionTime":"2026-02-18T00:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.844621 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.844654 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.844663 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.844676 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.844687 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:09Z","lastTransitionTime":"2026-02-18T00:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.947578 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.947644 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.947654 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.947670 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:09 crc kubenswrapper[4847]: I0218 00:26:09.947680 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:09Z","lastTransitionTime":"2026-02-18T00:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.050084 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.050160 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.050183 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.050213 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.050237 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:10Z","lastTransitionTime":"2026-02-18T00:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.153362 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.153423 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.153440 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.153463 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.153481 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:10Z","lastTransitionTime":"2026-02-18T00:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.255726 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.255773 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.255788 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.255806 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.255818 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:10Z","lastTransitionTime":"2026-02-18T00:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.358880 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.358921 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.358930 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.358945 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.358955 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:10Z","lastTransitionTime":"2026-02-18T00:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.367451 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 23:19:10.058202828 +0000 UTC Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.403947 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.403972 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.404073 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:10 crc kubenswrapper[4847]: E0218 00:26:10.404227 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:10 crc kubenswrapper[4847]: E0218 00:26:10.404377 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:10 crc kubenswrapper[4847]: E0218 00:26:10.404509 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.415314 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs\") pod \"network-metrics-daemon-5rg76\" (UID: \"1a7318b6-f24d-4785-bd56-ad5ecec493da\") " pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:10 crc kubenswrapper[4847]: E0218 00:26:10.415476 4847 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:26:10 crc kubenswrapper[4847]: E0218 00:26:10.415551 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs podName:1a7318b6-f24d-4785-bd56-ad5ecec493da nodeName:}" failed. No retries permitted until 2026-02-18 00:26:18.415532553 +0000 UTC m=+51.792883495 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs") pod "network-metrics-daemon-5rg76" (UID: "1a7318b6-f24d-4785-bd56-ad5ecec493da") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.461837 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.461911 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.461926 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.461947 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.461961 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:10Z","lastTransitionTime":"2026-02-18T00:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.564381 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.564672 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.564764 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.564868 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.564962 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:10Z","lastTransitionTime":"2026-02-18T00:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.668178 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.668246 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.668258 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.668274 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.668286 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:10Z","lastTransitionTime":"2026-02-18T00:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.770941 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.771009 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.771027 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.771053 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.771072 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:10Z","lastTransitionTime":"2026-02-18T00:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.874020 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.874130 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.874149 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.874175 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.874190 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:10Z","lastTransitionTime":"2026-02-18T00:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.979598 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.979707 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.979729 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.979759 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:10 crc kubenswrapper[4847]: I0218 00:26:10.979843 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:10Z","lastTransitionTime":"2026-02-18T00:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.081740 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.081775 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.081784 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.081799 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.081810 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:11Z","lastTransitionTime":"2026-02-18T00:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.185302 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.185353 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.185364 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.185386 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.185397 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:11Z","lastTransitionTime":"2026-02-18T00:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.288052 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.288086 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.288097 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.288113 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.288174 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:11Z","lastTransitionTime":"2026-02-18T00:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.368257 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 02:51:04.388295154 +0000 UTC Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.390050 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.390073 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.390081 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.390093 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.390101 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:11Z","lastTransitionTime":"2026-02-18T00:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.403892 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:11 crc kubenswrapper[4847]: E0218 00:26:11.403979 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.492493 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.492541 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.492552 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.492570 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.492581 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:11Z","lastTransitionTime":"2026-02-18T00:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.595167 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.595200 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.595231 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.595246 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.595254 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:11Z","lastTransitionTime":"2026-02-18T00:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.697719 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.697760 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.697768 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.697782 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.697798 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:11Z","lastTransitionTime":"2026-02-18T00:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.800759 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.800830 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.800852 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.800884 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.800907 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:11Z","lastTransitionTime":"2026-02-18T00:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.903052 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.903077 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.903087 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.903100 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:11 crc kubenswrapper[4847]: I0218 00:26:11.903108 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:11Z","lastTransitionTime":"2026-02-18T00:26:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.005562 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.005596 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.005624 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.005640 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.005650 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:12Z","lastTransitionTime":"2026-02-18T00:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.109278 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.110280 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.110537 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.110754 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.111006 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:12Z","lastTransitionTime":"2026-02-18T00:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.214706 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.214745 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.214754 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.214767 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.214777 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:12Z","lastTransitionTime":"2026-02-18T00:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.317289 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.317326 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.317338 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.317353 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.317365 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:12Z","lastTransitionTime":"2026-02-18T00:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.368354 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 23:37:41.280348449 +0000 UTC Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.404146 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.404243 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:12 crc kubenswrapper[4847]: E0218 00:26:12.404729 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:12 crc kubenswrapper[4847]: E0218 00:26:12.404744 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.404841 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:12 crc kubenswrapper[4847]: E0218 00:26:12.405331 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.419335 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.419366 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.419374 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.419389 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.419398 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:12Z","lastTransitionTime":"2026-02-18T00:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.522312 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.522551 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.522636 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.522747 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.522823 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:12Z","lastTransitionTime":"2026-02-18T00:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.625044 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.625081 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.625091 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.625107 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.625117 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:12Z","lastTransitionTime":"2026-02-18T00:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.726926 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.726958 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.726970 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.726986 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.726999 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:12Z","lastTransitionTime":"2026-02-18T00:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.829851 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.830370 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.830439 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.830533 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.830638 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:12Z","lastTransitionTime":"2026-02-18T00:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.933698 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.933751 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.933768 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.933793 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:12 crc kubenswrapper[4847]: I0218 00:26:12.933809 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:12Z","lastTransitionTime":"2026-02-18T00:26:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.037067 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.037117 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.037135 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.037158 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.037177 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:13Z","lastTransitionTime":"2026-02-18T00:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.139286 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.139334 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.139343 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.139357 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.139367 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:13Z","lastTransitionTime":"2026-02-18T00:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.241919 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.241965 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.241979 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.241995 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.242006 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:13Z","lastTransitionTime":"2026-02-18T00:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.344784 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.344842 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.344861 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.344884 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.344902 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:13Z","lastTransitionTime":"2026-02-18T00:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.369535 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:44:05.740414286 +0000 UTC Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.404278 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:13 crc kubenswrapper[4847]: E0218 00:26:13.404450 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.446896 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.447087 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.447254 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.447318 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.447377 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:13Z","lastTransitionTime":"2026-02-18T00:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.549096 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.549480 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.549660 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.549886 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.550033 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:13Z","lastTransitionTime":"2026-02-18T00:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.652196 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.652233 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.652241 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.652256 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.652266 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:13Z","lastTransitionTime":"2026-02-18T00:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.754559 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.754632 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.754645 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.754663 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.754676 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:13Z","lastTransitionTime":"2026-02-18T00:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.857063 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.857393 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.857511 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.857618 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.857742 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:13Z","lastTransitionTime":"2026-02-18T00:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.902853 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.902889 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.902898 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.902918 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.902928 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:13Z","lastTransitionTime":"2026-02-18T00:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:13 crc kubenswrapper[4847]: E0218 00:26:13.914535 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:13Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.918850 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.918870 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.918878 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.918890 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.918900 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:13Z","lastTransitionTime":"2026-02-18T00:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:13 crc kubenswrapper[4847]: E0218 00:26:13.939738 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:13Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.943122 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.943149 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.943159 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.943175 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.943187 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:13Z","lastTransitionTime":"2026-02-18T00:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:13 crc kubenswrapper[4847]: E0218 00:26:13.965207 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:13Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.971790 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.971839 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.971848 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.971862 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.971872 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:13Z","lastTransitionTime":"2026-02-18T00:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:13 crc kubenswrapper[4847]: E0218 00:26:13.988906 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:13Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.992217 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.992274 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.992289 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.992306 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:13 crc kubenswrapper[4847]: I0218 00:26:13.992319 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:13Z","lastTransitionTime":"2026-02-18T00:26:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:14 crc kubenswrapper[4847]: E0218 00:26:14.005530 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:14Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:14 crc kubenswrapper[4847]: E0218 00:26:14.005678 4847 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.007032 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.007070 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.007079 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.007094 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.007103 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:14Z","lastTransitionTime":"2026-02-18T00:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.109066 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.109135 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.109149 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.109165 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.109176 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:14Z","lastTransitionTime":"2026-02-18T00:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.211789 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.211837 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.211846 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.211863 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.211873 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:14Z","lastTransitionTime":"2026-02-18T00:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.314117 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.314163 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.314173 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.314187 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.314199 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:14Z","lastTransitionTime":"2026-02-18T00:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.370149 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 07:22:59.315398484 +0000 UTC Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.403680 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.403990 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:14 crc kubenswrapper[4847]: E0218 00:26:14.404153 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.404243 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:14 crc kubenswrapper[4847]: E0218 00:26:14.404750 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:14 crc kubenswrapper[4847]: E0218 00:26:14.404867 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.405327 4847 scope.go:117] "RemoveContainer" containerID="a534bc47fbd4eb0d9b494619634888e4eb4493faae918f5792adebbb093e8e6f" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.416545 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.416572 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.416581 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.416593 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.416614 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:14Z","lastTransitionTime":"2026-02-18T00:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.520189 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.520257 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.520277 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.520300 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.520318 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:14Z","lastTransitionTime":"2026-02-18T00:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.625836 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.625887 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.625953 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.625984 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.626068 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:14Z","lastTransitionTime":"2026-02-18T00:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.728672 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.728712 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.728720 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.728732 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.728742 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:14Z","lastTransitionTime":"2026-02-18T00:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.769181 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxm6w_86e5946b-870b-46f1-8923-4a8abd64da45/ovnkube-controller/1.log" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.771833 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerStarted","Data":"3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4"} Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.772348 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.794390 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4
b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:14Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.830970 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.831021 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.831033 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.831052 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 
00:26:14.831064 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:14Z","lastTransitionTime":"2026-02-18T00:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.832256 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a534bc47fbd4eb0d9b494619634888e4eb4493faae918f5792adebbb093e8e6f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:25:59Z\\\",\\\"message\\\":\\\" success event on pod openshift-machine-config-operator/machine-config-daemon-xsj47\\\\nI0218 00:25:59.741548 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0218 00:25:59.741556 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-wprf4\\\\nI0218 00:25:59.741559 6266 
default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0218 00:25:59.741561 6266 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z]\\\\nI0218 00:25:59.741563 6266 ovn.go:134] Ensuring zone local for Pod 
openshift-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\
"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:14Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.843018 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5rg76" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a7318b6-f24d-4785-bd56-ad5ecec493da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5rg76\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:14Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:14 crc 
kubenswrapper[4847]: I0218 00:26:14.862825 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846
bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] 
\\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:14Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.880459 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:14Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.908523 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:14Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.922933 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T00:26:14Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.932674 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.932702 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.932711 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.932725 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.932735 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:14Z","lastTransitionTime":"2026-02-18T00:26:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.943673 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:14Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.959441 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:14Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.971691 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:14Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.982284 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:14Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:14 crc kubenswrapper[4847]: I0218 00:26:14.994734 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:14Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.021710 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:15Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.035716 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.035764 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.035776 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.035795 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.035807 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:15Z","lastTransitionTime":"2026-02-18T00:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.037959 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:15Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.050584 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:15Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.066376 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:15Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.078068 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448c6cccc0c45f6e1da713ddeb37c39ed13a70a3cab0dc7962d39af8f6d97599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a216f41
297e68f3ea5eed1a47484c8e9e6cacfd432e7f8f33624d4e6277cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vk8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:15Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.138625 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.138657 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.138668 4847 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.138683 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.138695 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:15Z","lastTransitionTime":"2026-02-18T00:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.241931 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.241972 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.241980 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.241993 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.242003 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:15Z","lastTransitionTime":"2026-02-18T00:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.345739 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.345785 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.345801 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.345819 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.345830 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:15Z","lastTransitionTime":"2026-02-18T00:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.371380 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 02:30:55.05118296 +0000 UTC Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.403909 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:15 crc kubenswrapper[4847]: E0218 00:26:15.404061 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.448199 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.448237 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.448249 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.448263 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.448273 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:15Z","lastTransitionTime":"2026-02-18T00:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.550942 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.550983 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.550992 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.551011 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.551021 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:15Z","lastTransitionTime":"2026-02-18T00:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.654098 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.654136 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.654146 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.654162 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.654171 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:15Z","lastTransitionTime":"2026-02-18T00:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.757420 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.757484 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.757503 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.757526 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.757542 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:15Z","lastTransitionTime":"2026-02-18T00:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.777725 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxm6w_86e5946b-870b-46f1-8923-4a8abd64da45/ovnkube-controller/2.log" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.778385 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxm6w_86e5946b-870b-46f1-8923-4a8abd64da45/ovnkube-controller/1.log" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.781756 4847 generic.go:334] "Generic (PLEG): container finished" podID="86e5946b-870b-46f1-8923-4a8abd64da45" containerID="3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4" exitCode=1 Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.781812 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerDied","Data":"3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4"} Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.781876 4847 scope.go:117] "RemoveContainer" containerID="a534bc47fbd4eb0d9b494619634888e4eb4493faae918f5792adebbb093e8e6f" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.782649 4847 scope.go:117] "RemoveContainer" containerID="3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4" Feb 18 00:26:15 crc kubenswrapper[4847]: E0218 00:26:15.782872 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bxm6w_openshift-ovn-kubernetes(86e5946b-870b-46f1-8923-4a8abd64da45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.803444 4847 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02
-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b
5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:15Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.840055 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a534bc47fbd4eb0d9b494619634888e4eb4493faae918f5792adebbb093e8e6f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:25:59Z\\\",\\\"message\\\":\\\" success event on pod openshift-machine-config-operator/machine-config-daemon-xsj47\\\\nI0218 00:25:59.741548 6266 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0218 00:25:59.741556 6266 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-wprf4\\\\nI0218 00:25:59.741559 6266 
default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0218 00:25:59.741561 6266 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:25:59Z is after 2025-08-24T17:21:41Z]\\\\nI0218 00:25:59.741563 6266 ovn.go:134] Ensuring zone local for Pod openshift-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:26:15Z\\\",\\\"message\\\":\\\"go:140\\\\nI0218 00:26:15.334486 6481 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:26:15.334516 6481 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 00:26:15.334583 6481 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 00:26:15.334507 6481 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:26:15.334648 6481 factory.go:656] Stopping watch factory\\\\nI0218 00:26:15.334650 6481 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 00:26:15.334597 6481 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:26:15.334666 6481 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 00:26:15.334705 6481 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:26:15.334818 6481 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:26:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:15Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.854231 4847 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5rg76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a7318b6-f24d-4785-bd56-ad5ecec493da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5rg76\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:15Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:15 crc 
kubenswrapper[4847]: I0218 00:26:15.860496 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.860529 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.860539 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.860555 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.860567 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:15Z","lastTransitionTime":"2026-02-18T00:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.870205 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c25
9c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:15Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.881587 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:15Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.896469 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:15Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.912279 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T00:26:15Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.922680 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:15Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.941487 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:15Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.953155 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:15Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.964361 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.964439 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.964460 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:15 crc 
kubenswrapper[4847]: I0218 00:26:15.964493 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.964515 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:15Z","lastTransitionTime":"2026-02-18T00:26:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.966335 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:15Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.977319 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:15Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:15 crc kubenswrapper[4847]: I0218 00:26:15.998159 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:15Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.017075 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:16Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.034823 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:16Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.051878 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:16Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.064881 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448c6cccc0c45f6e1da713ddeb37c39ed13a70a3cab0dc7962d39af8f6d97599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a216f41
297e68f3ea5eed1a47484c8e9e6cacfd432e7f8f33624d4e6277cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vk8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:16Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.067462 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.067520 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.067532 4847 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.067549 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.067561 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:16Z","lastTransitionTime":"2026-02-18T00:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.171987 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.172096 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.172114 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.172135 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.172151 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:16Z","lastTransitionTime":"2026-02-18T00:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.275929 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.276003 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.276022 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.276051 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.276071 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:16Z","lastTransitionTime":"2026-02-18T00:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.371660 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 18:12:21.573610585 +0000 UTC Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.379732 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.379804 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.379824 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.379851 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.379871 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:16Z","lastTransitionTime":"2026-02-18T00:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.403414 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.403535 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.403581 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:16 crc kubenswrapper[4847]: E0218 00:26:16.403738 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:16 crc kubenswrapper[4847]: E0218 00:26:16.403647 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:16 crc kubenswrapper[4847]: E0218 00:26:16.403924 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.483820 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.483891 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.483917 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.483952 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.483972 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:16Z","lastTransitionTime":"2026-02-18T00:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.588024 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.588095 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.588115 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.588149 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.588171 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:16Z","lastTransitionTime":"2026-02-18T00:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.692337 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.692396 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.692414 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.692439 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.692458 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:16Z","lastTransitionTime":"2026-02-18T00:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.788489 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxm6w_86e5946b-870b-46f1-8923-4a8abd64da45/ovnkube-controller/2.log" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.792868 4847 scope.go:117] "RemoveContainer" containerID="3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4" Feb 18 00:26:16 crc kubenswrapper[4847]: E0218 00:26:16.793132 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bxm6w_openshift-ovn-kubernetes(86e5946b-870b-46f1-8923-4a8abd64da45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.795964 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.796023 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.796043 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.796069 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.796089 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:16Z","lastTransitionTime":"2026-02-18T00:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.805593 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" 
for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:16Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.818877 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:16Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.831451 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:16Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.863140 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:16Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.881788 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:16Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.897999 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:16Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.898803 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.898831 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.898839 4847 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.898853 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.898878 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:16Z","lastTransitionTime":"2026-02-18T00:26:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.917123 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:16Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.933123 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:16Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.947098 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:16Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.964925 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:16Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:16 crc kubenswrapper[4847]: I0218 00:26:16.979583 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448c6cccc0c45f6e1da713ddeb37c39ed13a70a3cab0dc7962d39af8f6d97599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f1
2962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a216f41297e68f3ea5eed1a47484c8e9e6cacfd432e7f8f33624d4e6277cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vk8bl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:16Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.000049 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:16Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.001429 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.001461 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.001472 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.001489 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.001500 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:17Z","lastTransitionTime":"2026-02-18T00:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.017426 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.034388 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.057936 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4
a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.079288 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:26:15Z\\\",\\\"message\\\":\\\"go:140\\\\nI0218 00:26:15.334486 6481 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:26:15.334516 6481 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 00:26:15.334583 6481 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0218 00:26:15.334507 6481 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:26:15.334648 6481 factory.go:656] Stopping watch factory\\\\nI0218 00:26:15.334650 6481 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 00:26:15.334597 6481 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:26:15.334666 6481 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 00:26:15.334705 6481 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:26:15.334818 6481 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:26:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxm6w_openshift-ovn-kubernetes(86e5946b-870b-46f1-8923-4a8abd64da45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55
bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.093202 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5rg76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a7318b6-f24d-4785-bd56-ad5ecec493da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5rg76\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.103987 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.104004 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.104013 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.104025 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.104035 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:17Z","lastTransitionTime":"2026-02-18T00:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.206360 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.206816 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.207052 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.207362 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.207584 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:17Z","lastTransitionTime":"2026-02-18T00:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.312061 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.312472 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.312860 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.313211 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.313563 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:17Z","lastTransitionTime":"2026-02-18T00:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.371893 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 08:50:45.727026751 +0000 UTC Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.403452 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:17 crc kubenswrapper[4847]: E0218 00:26:17.403822 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.417704 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.418187 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.418444 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.418796 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.419003 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:17Z","lastTransitionTime":"2026-02-18T00:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.433328 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.450263 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.469327 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.494465 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.512468 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.522180 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.522287 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.522310 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.522339 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.522360 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:17Z","lastTransitionTime":"2026-02-18T00:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.530809 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.551259 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.567213 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.578060 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.590222 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.604797 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448c6cccc0c45f6e1da713ddeb37c39ed13a70a3cab0dc7962d39af8f6d97599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f1
2962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a216f41297e68f3ea5eed1a47484c8e9e6cacfd432e7f8f33624d4e6277cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vk8bl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.620585 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.624738 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.624776 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.624788 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.624807 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.624818 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:17Z","lastTransitionTime":"2026-02-18T00:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.634361 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.650069 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.664207 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4
a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.685437 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:26:15Z\\\",\\\"message\\\":\\\"go:140\\\\nI0218 00:26:15.334486 6481 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:26:15.334516 6481 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 00:26:15.334583 6481 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0218 00:26:15.334507 6481 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:26:15.334648 6481 factory.go:656] Stopping watch factory\\\\nI0218 00:26:15.334650 6481 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 00:26:15.334597 6481 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:26:15.334666 6481 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 00:26:15.334705 6481 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:26:15.334818 6481 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:26:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxm6w_openshift-ovn-kubernetes(86e5946b-870b-46f1-8923-4a8abd64da45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55
bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.699901 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5rg76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a7318b6-f24d-4785-bd56-ad5ecec493da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5rg76\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:17Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.727365 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.727435 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.727453 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.727484 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.727502 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:17Z","lastTransitionTime":"2026-02-18T00:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.830515 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.830556 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.830569 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.830585 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.830611 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:17Z","lastTransitionTime":"2026-02-18T00:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.933680 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.933776 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.933796 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.933832 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:17 crc kubenswrapper[4847]: I0218 00:26:17.933856 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:17Z","lastTransitionTime":"2026-02-18T00:26:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.037942 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.038053 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.038080 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.038117 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.038140 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:18Z","lastTransitionTime":"2026-02-18T00:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.146146 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.146239 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.146267 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.146299 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.146323 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:18Z","lastTransitionTime":"2026-02-18T00:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.202630 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:26:18 crc kubenswrapper[4847]: E0218 00:26:18.202862 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 00:26:50.202823512 +0000 UTC m=+83.580174484 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.248732 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.248977 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.249072 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.249144 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.249206 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:18Z","lastTransitionTime":"2026-02-18T00:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.304313 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.304770 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.304795 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.304814 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:18 crc kubenswrapper[4847]: E0218 00:26:18.304590 4847 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:26:18 crc kubenswrapper[4847]: E0218 00:26:18.304956 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:26:50.304940904 +0000 UTC m=+83.682291846 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:26:18 crc kubenswrapper[4847]: E0218 00:26:18.305163 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:26:18 crc kubenswrapper[4847]: E0218 00:26:18.305178 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:26:18 crc kubenswrapper[4847]: E0218 00:26:18.305191 4847 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:26:18 crc kubenswrapper[4847]: E0218 00:26:18.305220 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-18 00:26:50.30521118 +0000 UTC m=+83.682562122 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:26:18 crc kubenswrapper[4847]: E0218 00:26:18.305520 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:26:18 crc kubenswrapper[4847]: E0218 00:26:18.305531 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:26:18 crc kubenswrapper[4847]: E0218 00:26:18.305538 4847 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:26:18 crc kubenswrapper[4847]: E0218 00:26:18.305560 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 00:26:50.305553288 +0000 UTC m=+83.682904230 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:26:18 crc kubenswrapper[4847]: E0218 00:26:18.304897 4847 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:26:18 crc kubenswrapper[4847]: E0218 00:26:18.305583 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:26:50.305577779 +0000 UTC m=+83.682928721 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.351278 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.351332 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.351341 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.351357 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.351368 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:18Z","lastTransitionTime":"2026-02-18T00:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.373552 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 08:24:56.478529649 +0000 UTC Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.403143 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.403225 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.403162 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:18 crc kubenswrapper[4847]: E0218 00:26:18.403279 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:18 crc kubenswrapper[4847]: E0218 00:26:18.403351 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:18 crc kubenswrapper[4847]: E0218 00:26:18.403526 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.454036 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.454095 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.454110 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.454135 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.454152 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:18Z","lastTransitionTime":"2026-02-18T00:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.507111 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs\") pod \"network-metrics-daemon-5rg76\" (UID: \"1a7318b6-f24d-4785-bd56-ad5ecec493da\") " pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:18 crc kubenswrapper[4847]: E0218 00:26:18.507269 4847 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:26:18 crc kubenswrapper[4847]: E0218 00:26:18.507357 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs podName:1a7318b6-f24d-4785-bd56-ad5ecec493da nodeName:}" failed. No retries permitted until 2026-02-18 00:26:34.507333852 +0000 UTC m=+67.884684804 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs") pod "network-metrics-daemon-5rg76" (UID: "1a7318b6-f24d-4785-bd56-ad5ecec493da") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.558180 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.558236 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.558245 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.558276 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.558289 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:18Z","lastTransitionTime":"2026-02-18T00:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.662209 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.662271 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.662289 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.662314 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.662334 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:18Z","lastTransitionTime":"2026-02-18T00:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.764692 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.764735 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.764758 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.764781 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.764797 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:18Z","lastTransitionTime":"2026-02-18T00:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.867389 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.867447 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.867466 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.867491 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.867509 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:18Z","lastTransitionTime":"2026-02-18T00:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.969843 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.969872 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.969879 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.969893 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:18 crc kubenswrapper[4847]: I0218 00:26:18.969904 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:18Z","lastTransitionTime":"2026-02-18T00:26:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.072829 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.072869 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.072880 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.072899 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.072913 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:19Z","lastTransitionTime":"2026-02-18T00:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.175743 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.175791 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.175806 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.175830 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.175847 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:19Z","lastTransitionTime":"2026-02-18T00:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.278501 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.278552 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.278568 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.278591 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.278651 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:19Z","lastTransitionTime":"2026-02-18T00:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.374658 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 00:46:52.835320376 +0000 UTC Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.381366 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.381504 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.381587 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.381675 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.381739 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:19Z","lastTransitionTime":"2026-02-18T00:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.403177 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:19 crc kubenswrapper[4847]: E0218 00:26:19.403350 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.483691 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.483933 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.483996 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.484225 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.484305 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:19Z","lastTransitionTime":"2026-02-18T00:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.586955 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.587013 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.587030 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.587054 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.587072 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:19Z","lastTransitionTime":"2026-02-18T00:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.607919 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.617347 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.630786 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\
"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:19Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.644863 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:19Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.656912 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:19Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.672521 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:19Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.684394 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:19Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.689324 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.689363 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.689374 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.689386 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.689395 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:19Z","lastTransitionTime":"2026-02-18T00:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.695973 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:19Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.707403 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:19Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.718945 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:19Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.729264 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:19Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.740960 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:19Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.752216 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448c6cccc0c45f6e1da713ddeb37c39ed13a70a3cab0dc7962d39af8f6d97599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f1
2962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a216f41297e68f3ea5eed1a47484c8e9e6cacfd432e7f8f33624d4e6277cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vk8bl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:19Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.765994 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:19Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.777733 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:19Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.789708 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:19Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.795178 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.795231 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.795249 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.795272 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.795289 4847 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:19Z","lastTransitionTime":"2026-02-18T00:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.811793 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:19Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.838475 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:26:15Z\\\",\\\"message\\\":\\\"go:140\\\\nI0218 00:26:15.334486 6481 reflector.go:311] Stopping reflector 
*v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:26:15.334516 6481 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 00:26:15.334583 6481 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 00:26:15.334507 6481 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:26:15.334648 6481 factory.go:656] Stopping watch factory\\\\nI0218 00:26:15.334650 6481 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 00:26:15.334597 6481 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:26:15.334666 6481 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 00:26:15.334705 6481 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:26:15.334818 6481 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:26:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxm6w_openshift-ovn-kubernetes(86e5946b-870b-46f1-8923-4a8abd64da45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55
bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:19Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.850180 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5rg76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a7318b6-f24d-4785-bd56-ad5ecec493da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5rg76\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:19Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.898337 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.898396 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.898409 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.898430 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:19 crc kubenswrapper[4847]: I0218 00:26:19.898441 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:19Z","lastTransitionTime":"2026-02-18T00:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.001302 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.001570 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.001705 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.001806 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.001908 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:20Z","lastTransitionTime":"2026-02-18T00:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.104959 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.105434 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.105689 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.105894 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.106107 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:20Z","lastTransitionTime":"2026-02-18T00:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.209183 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.209255 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.209275 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.209301 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.209319 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:20Z","lastTransitionTime":"2026-02-18T00:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.312360 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.312421 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.312438 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.312464 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.312480 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:20Z","lastTransitionTime":"2026-02-18T00:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.376195 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 02:54:48.380278022 +0000 UTC Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.403566 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:20 crc kubenswrapper[4847]: E0218 00:26:20.403829 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.404082 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:20 crc kubenswrapper[4847]: E0218 00:26:20.404292 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.404413 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:20 crc kubenswrapper[4847]: E0218 00:26:20.404508 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.414275 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.414313 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.414325 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.414339 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.414350 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:20Z","lastTransitionTime":"2026-02-18T00:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.517536 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.517581 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.517592 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.517629 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.517642 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:20Z","lastTransitionTime":"2026-02-18T00:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.619850 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.619910 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.619927 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.619951 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.619972 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:20Z","lastTransitionTime":"2026-02-18T00:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.723777 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.723844 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.723866 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.723894 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.723919 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:20Z","lastTransitionTime":"2026-02-18T00:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.826340 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.826416 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.826447 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.826473 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.826500 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:20Z","lastTransitionTime":"2026-02-18T00:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.929826 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.929887 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.929905 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.929929 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:20 crc kubenswrapper[4847]: I0218 00:26:20.929948 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:20Z","lastTransitionTime":"2026-02-18T00:26:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.032997 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.033042 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.033051 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.033065 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.033073 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:21Z","lastTransitionTime":"2026-02-18T00:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.135512 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.135567 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.135586 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.135638 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.135658 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:21Z","lastTransitionTime":"2026-02-18T00:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.238752 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.238800 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.238814 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.238832 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.238849 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:21Z","lastTransitionTime":"2026-02-18T00:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.342258 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.342302 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.342317 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.342336 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.342349 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:21Z","lastTransitionTime":"2026-02-18T00:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.377161 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 15:01:07.148947159 +0000 UTC Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.403665 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:21 crc kubenswrapper[4847]: E0218 00:26:21.403817 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.445073 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.445137 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.445160 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.445183 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.445200 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:21Z","lastTransitionTime":"2026-02-18T00:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.547754 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.547800 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.547821 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.547845 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.547860 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:21Z","lastTransitionTime":"2026-02-18T00:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.650299 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.650354 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.650373 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.650398 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.650419 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:21Z","lastTransitionTime":"2026-02-18T00:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.753573 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.753639 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.753659 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.753677 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.753693 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:21Z","lastTransitionTime":"2026-02-18T00:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.855701 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.855854 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.855898 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.855916 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.855929 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:21Z","lastTransitionTime":"2026-02-18T00:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.958804 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.958900 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.958926 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.958958 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:21 crc kubenswrapper[4847]: I0218 00:26:21.958981 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:21Z","lastTransitionTime":"2026-02-18T00:26:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.063577 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.063703 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.063730 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.063788 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.063810 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:22Z","lastTransitionTime":"2026-02-18T00:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.166778 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.166848 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.166865 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.166890 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.166908 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:22Z","lastTransitionTime":"2026-02-18T00:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.269590 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.269689 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.269707 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.269734 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.269753 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:22Z","lastTransitionTime":"2026-02-18T00:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.373260 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.373315 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.373334 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.373359 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.373377 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:22Z","lastTransitionTime":"2026-02-18T00:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.377546 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 14:27:50.626128411 +0000 UTC Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.403973 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.404025 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.404002 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:22 crc kubenswrapper[4847]: E0218 00:26:22.404168 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:22 crc kubenswrapper[4847]: E0218 00:26:22.404288 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:22 crc kubenswrapper[4847]: E0218 00:26:22.404435 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.476143 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.476220 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.476242 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.476272 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.476294 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:22Z","lastTransitionTime":"2026-02-18T00:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.579728 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.579777 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.579794 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.579822 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.579840 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:22Z","lastTransitionTime":"2026-02-18T00:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.683532 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.683648 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.683670 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.683696 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.683714 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:22Z","lastTransitionTime":"2026-02-18T00:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.786409 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.786505 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.786529 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.786564 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.786584 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:22Z","lastTransitionTime":"2026-02-18T00:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.889772 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.889844 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.889868 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.889897 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.889920 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:22Z","lastTransitionTime":"2026-02-18T00:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.994249 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.994320 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.994345 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.994372 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:22 crc kubenswrapper[4847]: I0218 00:26:22.994416 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:22Z","lastTransitionTime":"2026-02-18T00:26:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.097681 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.097773 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.097799 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.097829 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.097848 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:23Z","lastTransitionTime":"2026-02-18T00:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.200869 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.200936 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.200955 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.200981 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.201000 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:23Z","lastTransitionTime":"2026-02-18T00:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.304073 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.304127 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.304144 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.304166 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.304182 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:23Z","lastTransitionTime":"2026-02-18T00:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.378565 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 20:12:09.018842464 +0000 UTC Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.404097 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:23 crc kubenswrapper[4847]: E0218 00:26:23.404324 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.407274 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.407318 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.407335 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.407358 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.407376 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:23Z","lastTransitionTime":"2026-02-18T00:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.509654 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.509709 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.509725 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.509747 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.509764 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:23Z","lastTransitionTime":"2026-02-18T00:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.612999 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.613047 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.613063 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.613086 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.613107 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:23Z","lastTransitionTime":"2026-02-18T00:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.716033 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.716081 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.716096 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.716118 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.716135 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:23Z","lastTransitionTime":"2026-02-18T00:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.820288 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.820366 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.820388 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.820416 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.820441 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:23Z","lastTransitionTime":"2026-02-18T00:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.923438 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.923505 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.923527 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.923559 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:23 crc kubenswrapper[4847]: I0218 00:26:23.923581 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:23Z","lastTransitionTime":"2026-02-18T00:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.026444 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.026511 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.026535 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.026563 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.026584 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:24Z","lastTransitionTime":"2026-02-18T00:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.130322 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.130384 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.130406 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.130431 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.130451 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:24Z","lastTransitionTime":"2026-02-18T00:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.232490 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.232530 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.232542 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.232556 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.232568 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:24Z","lastTransitionTime":"2026-02-18T00:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.267895 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.267949 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.267959 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.267972 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.267981 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:24Z","lastTransitionTime":"2026-02-18T00:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:24 crc kubenswrapper[4847]: E0218 00:26:24.284437 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:24Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.292716 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.292748 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.292757 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.292771 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.292780 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:24Z","lastTransitionTime":"2026-02-18T00:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:24 crc kubenswrapper[4847]: E0218 00:26:24.309951 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:24Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.314769 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.314796 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.314805 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.314819 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.314828 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:24Z","lastTransitionTime":"2026-02-18T00:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:24 crc kubenswrapper[4847]: E0218 00:26:24.325943 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:24Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.329968 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.330022 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.330118 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.330167 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.330185 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:24Z","lastTransitionTime":"2026-02-18T00:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:24 crc kubenswrapper[4847]: E0218 00:26:24.341983 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:24Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.346695 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.346757 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.346779 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.346801 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.346819 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:24Z","lastTransitionTime":"2026-02-18T00:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:24 crc kubenswrapper[4847]: E0218 00:26:24.364204 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:24Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:24 crc kubenswrapper[4847]: E0218 00:26:24.364432 4847 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.366510 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.366635 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.366657 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.366677 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.366694 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:24Z","lastTransitionTime":"2026-02-18T00:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.379408 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 05:45:45.899342989 +0000 UTC Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.403838 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.403858 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:24 crc kubenswrapper[4847]: E0218 00:26:24.404054 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.404125 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:24 crc kubenswrapper[4847]: E0218 00:26:24.404181 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:24 crc kubenswrapper[4847]: E0218 00:26:24.404301 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.469586 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.469688 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.469711 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.469743 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.469769 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:24Z","lastTransitionTime":"2026-02-18T00:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.571914 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.571969 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.571985 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.572007 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.572026 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:24Z","lastTransitionTime":"2026-02-18T00:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.674010 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.674052 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.674065 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.674081 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.674093 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:24Z","lastTransitionTime":"2026-02-18T00:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.776239 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.776280 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.776290 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.776305 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.776314 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:24Z","lastTransitionTime":"2026-02-18T00:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.879748 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.879805 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.879824 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.879849 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.879866 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:24Z","lastTransitionTime":"2026-02-18T00:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.983059 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.983126 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.983140 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.983158 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:24 crc kubenswrapper[4847]: I0218 00:26:24.983171 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:24Z","lastTransitionTime":"2026-02-18T00:26:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.085678 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.085740 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.085759 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.085784 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.085803 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:25Z","lastTransitionTime":"2026-02-18T00:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.188975 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.189037 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.189058 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.189081 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.189099 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:25Z","lastTransitionTime":"2026-02-18T00:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.291978 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.292038 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.292059 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.292083 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.292100 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:25Z","lastTransitionTime":"2026-02-18T00:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.379718 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 03:57:10.853245773 +0000 UTC Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.395324 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.395377 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.395395 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.395418 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.395437 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:25Z","lastTransitionTime":"2026-02-18T00:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.403748 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:25 crc kubenswrapper[4847]: E0218 00:26:25.403956 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.498406 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.498498 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.498516 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.498539 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.498555 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:25Z","lastTransitionTime":"2026-02-18T00:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.601669 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.601716 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.601731 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.601755 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.601772 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:25Z","lastTransitionTime":"2026-02-18T00:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.705106 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.705187 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.705212 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.705243 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.705269 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:25Z","lastTransitionTime":"2026-02-18T00:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.807876 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.807937 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.807956 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.807982 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.808002 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:25Z","lastTransitionTime":"2026-02-18T00:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.911520 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.911575 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.911590 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.911652 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:25 crc kubenswrapper[4847]: I0218 00:26:25.911670 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:25Z","lastTransitionTime":"2026-02-18T00:26:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.015182 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.015240 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.015256 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.015283 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.015300 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:26Z","lastTransitionTime":"2026-02-18T00:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.118993 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.119040 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.119067 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.119089 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.119104 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:26Z","lastTransitionTime":"2026-02-18T00:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.222197 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.222260 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.222279 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.222304 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.222323 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:26Z","lastTransitionTime":"2026-02-18T00:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.325912 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.326404 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.326740 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.326931 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.327067 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:26Z","lastTransitionTime":"2026-02-18T00:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.380202 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 12:46:57.948403465 +0000 UTC Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.404113 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.404167 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:26 crc kubenswrapper[4847]: E0218 00:26:26.404333 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:26 crc kubenswrapper[4847]: E0218 00:26:26.404395 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.404517 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:26 crc kubenswrapper[4847]: E0218 00:26:26.405747 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.430265 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.430309 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.430326 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.430346 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.430362 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:26Z","lastTransitionTime":"2026-02-18T00:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.533482 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.533531 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.533546 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.533563 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.533574 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:26Z","lastTransitionTime":"2026-02-18T00:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.636491 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.636525 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.636538 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.636553 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.636565 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:26Z","lastTransitionTime":"2026-02-18T00:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.739980 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.740028 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.740046 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.740070 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.740089 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:26Z","lastTransitionTime":"2026-02-18T00:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.843296 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.843346 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.843364 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.843391 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.843410 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:26Z","lastTransitionTime":"2026-02-18T00:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.946269 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.946336 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.946359 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.946391 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:26 crc kubenswrapper[4847]: I0218 00:26:26.946415 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:26Z","lastTransitionTime":"2026-02-18T00:26:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.049660 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.049698 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.049713 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.049735 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.049751 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:27Z","lastTransitionTime":"2026-02-18T00:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.151989 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.152037 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.152049 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.152068 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.152080 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:27Z","lastTransitionTime":"2026-02-18T00:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.254367 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.254412 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.254424 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.254445 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.254457 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:27Z","lastTransitionTime":"2026-02-18T00:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.357588 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.358049 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.358211 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.358363 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.358508 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:27Z","lastTransitionTime":"2026-02-18T00:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.381114 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 23:03:00.221787142 +0000 UTC Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.403705 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:27 crc kubenswrapper[4847]: E0218 00:26:27.403905 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.428393 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17a
b95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.460781 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.461299 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.461336 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.461351 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.461373 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.461393 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:27Z","lastTransitionTime":"2026-02-18T00:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.481400 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.500048 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.523068 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.540495 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.555258 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.563643 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.563678 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.563690 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.563713 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.563725 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:27Z","lastTransitionTime":"2026-02-18T00:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.573161 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:27Z 
is after 2025-08-24T17:21:41Z" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.588202 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448c6cccc0c45f6e1da713ddeb37c39ed13a70a3cab0dc7962d39af8f6d97599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a216f41297e68f3ea5eed1a47484c8e9e6cacfd432e7f8f33624d4e6277cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vk8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.609313 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:
46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.621532 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.640434 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.662481 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4
a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.666941 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.667013 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.667032 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.667057 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.667075 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:27Z","lastTransitionTime":"2026-02-18T00:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.692689 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:26:15Z\\\",\\\"message\\\":\\\"go:140\\\\nI0218 00:26:15.334486 6481 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:26:15.334516 6481 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 00:26:15.334583 6481 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0218 00:26:15.334507 6481 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:26:15.334648 6481 factory.go:656] Stopping watch factory\\\\nI0218 00:26:15.334650 6481 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 00:26:15.334597 6481 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:26:15.334666 6481 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 00:26:15.334705 6481 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:26:15.334818 6481 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:26:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxm6w_openshift-ovn-kubernetes(86e5946b-870b-46f1-8923-4a8abd64da45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55
bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.708656 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5rg76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a7318b6-f24d-4785-bd56-ad5ecec493da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5rg76\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.720996 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff63f7c0-7517-44c0-a9d2-dac39aa374ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7093efe7a91f141a0eb9226115d13254da687dd479d70d9fd0736ab942f377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26b1f245e290d81692c7b3ed3f65742fef2a03f29079ca4f8c108879a4c97b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5f92485d5aa0367c4e57cc6d0e1290f2fc5895346260d7a3c809f1c2dcf311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dfa3ef
5dc69520156103624de4469e078651072d23753e7f9ae1f3ec145236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95dfa3ef5dc69520156103624de4469e078651072d23753e7f9ae1f3ec145236\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.737114 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T00:26:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.751047 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:27Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.772149 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.772188 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.772196 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.772209 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.772219 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:27Z","lastTransitionTime":"2026-02-18T00:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.874679 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.874723 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.874734 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.874751 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.874765 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:27Z","lastTransitionTime":"2026-02-18T00:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.977443 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.977482 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.977491 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.977507 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:27 crc kubenswrapper[4847]: I0218 00:26:27.977517 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:27Z","lastTransitionTime":"2026-02-18T00:26:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.080592 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.080697 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.080721 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.080755 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.080780 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:28Z","lastTransitionTime":"2026-02-18T00:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.183657 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.183717 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.183729 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.183744 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.183754 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:28Z","lastTransitionTime":"2026-02-18T00:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.287418 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.287477 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.287495 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.287517 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.287536 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:28Z","lastTransitionTime":"2026-02-18T00:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.381651 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 19:59:46.040942751 +0000 UTC Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.389840 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.389870 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.389881 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.389899 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.389911 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:28Z","lastTransitionTime":"2026-02-18T00:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.403551 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:28 crc kubenswrapper[4847]: E0218 00:26:28.403707 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.403776 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:28 crc kubenswrapper[4847]: E0218 00:26:28.403831 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.403887 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:28 crc kubenswrapper[4847]: E0218 00:26:28.403947 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.492090 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.492127 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.492137 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.492152 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.492163 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:28Z","lastTransitionTime":"2026-02-18T00:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.595286 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.595330 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.595342 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.595360 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.595375 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:28Z","lastTransitionTime":"2026-02-18T00:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.698447 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.698741 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.698811 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.698879 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.698944 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:28Z","lastTransitionTime":"2026-02-18T00:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.801837 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.802392 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.802530 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.802708 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.802825 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:28Z","lastTransitionTime":"2026-02-18T00:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.904949 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.905241 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.905360 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.905519 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:28 crc kubenswrapper[4847]: I0218 00:26:28.905698 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:28Z","lastTransitionTime":"2026-02-18T00:26:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.008266 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.008540 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.008694 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.008822 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.008927 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:29Z","lastTransitionTime":"2026-02-18T00:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.111530 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.111592 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.111636 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.111659 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.111677 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:29Z","lastTransitionTime":"2026-02-18T00:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.214147 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.214193 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.214205 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.214221 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.214233 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:29Z","lastTransitionTime":"2026-02-18T00:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.316885 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.316918 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.316926 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.316949 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.316959 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:29Z","lastTransitionTime":"2026-02-18T00:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.382488 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 02:12:35.887930667 +0000 UTC Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.404296 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:29 crc kubenswrapper[4847]: E0218 00:26:29.404521 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.418700 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.418754 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.418772 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.418795 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.418814 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:29Z","lastTransitionTime":"2026-02-18T00:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.522240 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.522309 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.522331 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.522388 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.522417 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:29Z","lastTransitionTime":"2026-02-18T00:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.625290 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.625323 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.625334 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.625396 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.625407 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:29Z","lastTransitionTime":"2026-02-18T00:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.728023 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.728099 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.728124 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.728153 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.728176 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:29Z","lastTransitionTime":"2026-02-18T00:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.831261 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.831358 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.831386 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.831416 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.831433 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:29Z","lastTransitionTime":"2026-02-18T00:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.934515 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.934548 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.934557 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.934571 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:29 crc kubenswrapper[4847]: I0218 00:26:29.934582 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:29Z","lastTransitionTime":"2026-02-18T00:26:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.037881 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.037916 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.037925 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.037938 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.037948 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:30Z","lastTransitionTime":"2026-02-18T00:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.139747 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.139774 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.139785 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.139800 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.139810 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:30Z","lastTransitionTime":"2026-02-18T00:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.243956 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.244006 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.244017 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.244034 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.244052 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:30Z","lastTransitionTime":"2026-02-18T00:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.347382 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.347431 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.347441 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.347456 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.347466 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:30Z","lastTransitionTime":"2026-02-18T00:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.383082 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 11:40:09.98637843 +0000 UTC Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.403419 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.403466 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:30 crc kubenswrapper[4847]: E0218 00:26:30.403558 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.403429 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:30 crc kubenswrapper[4847]: E0218 00:26:30.403811 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:30 crc kubenswrapper[4847]: E0218 00:26:30.404022 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.450059 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.450103 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.450134 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.450152 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.450164 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:30Z","lastTransitionTime":"2026-02-18T00:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.553294 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.553355 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.553372 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.553397 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.553414 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:30Z","lastTransitionTime":"2026-02-18T00:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.656293 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.656342 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.656361 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.656383 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.656400 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:30Z","lastTransitionTime":"2026-02-18T00:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.758972 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.759016 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.759029 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.759047 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.759059 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:30Z","lastTransitionTime":"2026-02-18T00:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.861253 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.861297 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.861307 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.861323 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.861333 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:30Z","lastTransitionTime":"2026-02-18T00:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.964488 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.964552 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.964576 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.964642 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:30 crc kubenswrapper[4847]: I0218 00:26:30.964667 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:30Z","lastTransitionTime":"2026-02-18T00:26:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.067740 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.067804 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.067828 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.067859 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.067883 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:31Z","lastTransitionTime":"2026-02-18T00:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.171488 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.171784 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.171878 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.171969 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.172064 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:31Z","lastTransitionTime":"2026-02-18T00:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.275068 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.275338 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.275431 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.275527 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.275622 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:31Z","lastTransitionTime":"2026-02-18T00:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.378496 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.378562 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.378583 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.378652 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.378676 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:31Z","lastTransitionTime":"2026-02-18T00:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.383773 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 06:38:17.153569361 +0000 UTC Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.404304 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:31 crc kubenswrapper[4847]: E0218 00:26:31.404923 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.405398 4847 scope.go:117] "RemoveContainer" containerID="3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4" Feb 18 00:26:31 crc kubenswrapper[4847]: E0218 00:26:31.405872 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bxm6w_openshift-ovn-kubernetes(86e5946b-870b-46f1-8923-4a8abd64da45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.480972 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.481263 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.481536 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.481739 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.481964 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:31Z","lastTransitionTime":"2026-02-18T00:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.583893 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.583928 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.583938 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.583953 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.583963 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:31Z","lastTransitionTime":"2026-02-18T00:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.686079 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.686127 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.686139 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.686158 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.686172 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:31Z","lastTransitionTime":"2026-02-18T00:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.789570 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.789616 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.789626 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.789641 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.789653 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:31Z","lastTransitionTime":"2026-02-18T00:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.891831 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.892136 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.892369 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.892494 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.892590 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:31Z","lastTransitionTime":"2026-02-18T00:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.995381 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.995427 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.995454 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.995483 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:31 crc kubenswrapper[4847]: I0218 00:26:31.995491 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:31Z","lastTransitionTime":"2026-02-18T00:26:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.098161 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.098346 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.098412 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.098531 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.098567 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:32Z","lastTransitionTime":"2026-02-18T00:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.202200 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.202242 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.202252 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.202267 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.202277 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:32Z","lastTransitionTime":"2026-02-18T00:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.304204 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.304270 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.304290 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.304330 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.304481 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:32Z","lastTransitionTime":"2026-02-18T00:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.384119 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:29:56.051070762 +0000 UTC Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.403510 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.403571 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:32 crc kubenswrapper[4847]: E0218 00:26:32.403683 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.403834 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:32 crc kubenswrapper[4847]: E0218 00:26:32.403886 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:32 crc kubenswrapper[4847]: E0218 00:26:32.404066 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.406631 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.406722 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.406784 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.406857 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.406953 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:32Z","lastTransitionTime":"2026-02-18T00:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.509109 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.509169 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.509180 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.509202 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.509213 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:32Z","lastTransitionTime":"2026-02-18T00:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.611497 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.611742 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.611813 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.611877 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.611935 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:32Z","lastTransitionTime":"2026-02-18T00:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.714460 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.714526 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.714539 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.714557 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.714569 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:32Z","lastTransitionTime":"2026-02-18T00:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.817945 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.818024 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.818048 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.818116 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.818146 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:32Z","lastTransitionTime":"2026-02-18T00:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.920591 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.920682 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.920699 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.920725 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:32 crc kubenswrapper[4847]: I0218 00:26:32.920742 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:32Z","lastTransitionTime":"2026-02-18T00:26:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.023256 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.023299 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.023311 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.023328 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.023337 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:33Z","lastTransitionTime":"2026-02-18T00:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.126585 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.126673 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.126701 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.126727 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.126745 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:33Z","lastTransitionTime":"2026-02-18T00:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.230521 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.230661 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.230692 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.230732 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.230756 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:33Z","lastTransitionTime":"2026-02-18T00:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.335752 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.335841 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.335892 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.335913 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.335930 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:33Z","lastTransitionTime":"2026-02-18T00:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.385258 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:11:31.476732086 +0000 UTC Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.408056 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:33 crc kubenswrapper[4847]: E0218 00:26:33.408230 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.443867 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.444119 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.444224 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.444327 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.444414 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:33Z","lastTransitionTime":"2026-02-18T00:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.548334 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.548582 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.548699 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.548778 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.548842 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:33Z","lastTransitionTime":"2026-02-18T00:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.652373 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.652884 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.652985 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.653052 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.653124 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:33Z","lastTransitionTime":"2026-02-18T00:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.756877 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.756944 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.756956 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.756974 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.756986 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:33Z","lastTransitionTime":"2026-02-18T00:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.859386 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.859850 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.860030 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.860174 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.860307 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:33Z","lastTransitionTime":"2026-02-18T00:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.964331 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.964378 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.964390 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.964413 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:33 crc kubenswrapper[4847]: I0218 00:26:33.964426 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:33Z","lastTransitionTime":"2026-02-18T00:26:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.068181 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.068244 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.068262 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.068292 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.068312 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:34Z","lastTransitionTime":"2026-02-18T00:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.172656 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.174105 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.174445 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.174703 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.174852 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:34Z","lastTransitionTime":"2026-02-18T00:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.278218 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.278284 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.278297 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.278322 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.278338 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:34Z","lastTransitionTime":"2026-02-18T00:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.380991 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.381407 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.381650 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.381808 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.381927 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:34Z","lastTransitionTime":"2026-02-18T00:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.385747 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 13:21:09.355721359 +0000 UTC Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.404197 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.404377 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:34 crc kubenswrapper[4847]: E0218 00:26:34.404652 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.404679 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:34 crc kubenswrapper[4847]: E0218 00:26:34.404980 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:34 crc kubenswrapper[4847]: E0218 00:26:34.404375 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.485022 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.485400 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.485563 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.485740 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.485877 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:34Z","lastTransitionTime":"2026-02-18T00:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.582416 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs\") pod \"network-metrics-daemon-5rg76\" (UID: \"1a7318b6-f24d-4785-bd56-ad5ecec493da\") " pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:34 crc kubenswrapper[4847]: E0218 00:26:34.582910 4847 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:26:34 crc kubenswrapper[4847]: E0218 00:26:34.583050 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs podName:1a7318b6-f24d-4785-bd56-ad5ecec493da nodeName:}" failed. No retries permitted until 2026-02-18 00:27:06.583021305 +0000 UTC m=+99.960372287 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs") pod "network-metrics-daemon-5rg76" (UID: "1a7318b6-f24d-4785-bd56-ad5ecec493da") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.589545 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.589975 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.590114 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.590266 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.590426 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:34Z","lastTransitionTime":"2026-02-18T00:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.678235 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.678471 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.678541 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.678629 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.678735 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:34Z","lastTransitionTime":"2026-02-18T00:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:34 crc kubenswrapper[4847]: E0218 00:26:34.690222 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.693465 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.693544 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.693579 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.693617 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.693630 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:34Z","lastTransitionTime":"2026-02-18T00:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:34 crc kubenswrapper[4847]: E0218 00:26:34.705277 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.708837 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.708957 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.709037 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.709132 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.709201 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:34Z","lastTransitionTime":"2026-02-18T00:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:34 crc kubenswrapper[4847]: E0218 00:26:34.718819 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.721417 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.721460 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.721474 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.721489 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.721504 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:34Z","lastTransitionTime":"2026-02-18T00:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:34 crc kubenswrapper[4847]: E0218 00:26:34.731128 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.734150 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.734230 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.734253 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.734283 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.734312 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:34Z","lastTransitionTime":"2026-02-18T00:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:34 crc kubenswrapper[4847]: E0218 00:26:34.751850 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:34Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:34 crc kubenswrapper[4847]: E0218 00:26:34.752195 4847 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.753747 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.753785 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.753794 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.753810 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.753819 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:34Z","lastTransitionTime":"2026-02-18T00:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.855819 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.855866 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.855876 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.855890 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.855899 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:34Z","lastTransitionTime":"2026-02-18T00:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.958121 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.958172 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.958190 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.958212 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:34 crc kubenswrapper[4847]: I0218 00:26:34.958227 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:34Z","lastTransitionTime":"2026-02-18T00:26:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.060927 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.061143 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.061229 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.061327 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.061388 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:35Z","lastTransitionTime":"2026-02-18T00:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.164250 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.164286 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.164295 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.164311 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.164320 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:35Z","lastTransitionTime":"2026-02-18T00:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.266230 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.266266 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.266276 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.266291 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.266300 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:35Z","lastTransitionTime":"2026-02-18T00:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.369182 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.369256 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.369282 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.369313 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.369335 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:35Z","lastTransitionTime":"2026-02-18T00:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.386733 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:12:02.819117493 +0000 UTC Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.403502 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:35 crc kubenswrapper[4847]: E0218 00:26:35.403774 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.472231 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.472296 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.472313 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.472333 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.472348 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:35Z","lastTransitionTime":"2026-02-18T00:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.575032 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.575252 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.575320 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.575405 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.575493 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:35Z","lastTransitionTime":"2026-02-18T00:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.677282 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.677503 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.677572 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.677687 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.677751 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:35Z","lastTransitionTime":"2026-02-18T00:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.780212 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.780619 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.780711 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.780802 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.780880 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:35Z","lastTransitionTime":"2026-02-18T00:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.854392 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wprf4_f2eb9a65-88b5-49d1-885a-98c60c1283b4/kube-multus/0.log" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.854445 4847 generic.go:334] "Generic (PLEG): container finished" podID="f2eb9a65-88b5-49d1-885a-98c60c1283b4" containerID="61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6" exitCode=1 Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.854473 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wprf4" event={"ID":"f2eb9a65-88b5-49d1-885a-98c60c1283b4","Type":"ContainerDied","Data":"61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6"} Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.854787 4847 scope.go:117] "RemoveContainer" containerID="61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.865667 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff63f7c0-7517-44c0-a9d2-dac39aa374ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7093efe7a91f141a0eb9226115d13254da687dd479d70d9fd0736ab942f377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26b1f245e290d81692c7b3ed3f65742fef2a03f29079ca4f8c108879a4c97b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5f92485d5aa0367c4e57cc6d0e1290f2fc5895346260d7a3c809f1c2dcf311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dfa3ef5dc69520156103624de4469e078651072d23753e7f9ae1f3ec145236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://95dfa3ef5dc69520156103624de4469e078651072d23753e7f9ae1f3ec145236\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.877099 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T00:26:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.885359 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.886310 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.886339 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.886348 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.886368 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.886378 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:35Z","lastTransitionTime":"2026-02-18T00:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.898584 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee135705
7ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.915996 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.929865 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.941684 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.953150 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.963401 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.971115 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.981735 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:26:35Z\\\",\\\"message\\\":\\\"2026-02-18T00:25:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5f609a48-dc45-4f0c-82f6-0c8971f2d627\\\\n2026-02-18T00:25:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5f609a48-dc45-4f0c-82f6-0c8971f2d627 to /host/opt/cni/bin/\\\\n2026-02-18T00:25:50Z [verbose] multus-daemon started\\\\n2026-02-18T00:25:50Z [verbose] Readiness Indicator file check\\\\n2026-02-18T00:26:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.988928 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.988980 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.988995 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.989014 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.989026 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:35Z","lastTransitionTime":"2026-02-18T00:26:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:35 crc kubenswrapper[4847]: I0218 00:26:35.990778 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448c6cccc0c45f6e1da713ddeb37c39ed13a70a3cab0dc7962d39af8f6d97599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a216f41297e68f3ea5eed1a47484c8e9e6cacfd432e7f8f33624d4e6277cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vk8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:35Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.003883 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T
00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.014405 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.024350 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.036096 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4
a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.053983 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:26:15Z\\\",\\\"message\\\":\\\"go:140\\\\nI0218 00:26:15.334486 6481 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:26:15.334516 6481 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 00:26:15.334583 6481 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0218 00:26:15.334507 6481 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:26:15.334648 6481 factory.go:656] Stopping watch factory\\\\nI0218 00:26:15.334650 6481 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 00:26:15.334597 6481 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:26:15.334666 6481 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 00:26:15.334705 6481 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:26:15.334818 6481 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:26:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxm6w_openshift-ovn-kubernetes(86e5946b-870b-46f1-8923-4a8abd64da45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55
bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.068852 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5rg76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a7318b6-f24d-4785-bd56-ad5ecec493da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5rg76\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.091946 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.092012 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.092022 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.092039 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.092065 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:36Z","lastTransitionTime":"2026-02-18T00:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.194507 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.194554 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.194566 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.194586 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.194620 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:36Z","lastTransitionTime":"2026-02-18T00:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.297180 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.297231 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.297242 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.297258 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.297270 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:36Z","lastTransitionTime":"2026-02-18T00:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.387084 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 10:44:22.510607741 +0000 UTC Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.400164 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.400260 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.400283 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.400309 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.400326 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:36Z","lastTransitionTime":"2026-02-18T00:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.403587 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.403675 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:36 crc kubenswrapper[4847]: E0218 00:26:36.403809 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.403713 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:36 crc kubenswrapper[4847]: E0218 00:26:36.404037 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:36 crc kubenswrapper[4847]: E0218 00:26:36.404177 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.503004 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.503081 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.503106 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.503135 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.503157 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:36Z","lastTransitionTime":"2026-02-18T00:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.606061 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.606126 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.606142 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.606169 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.606187 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:36Z","lastTransitionTime":"2026-02-18T00:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.709032 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.709093 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.709111 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.709136 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.709154 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:36Z","lastTransitionTime":"2026-02-18T00:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.811663 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.811696 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.811705 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.811718 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.811728 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:36Z","lastTransitionTime":"2026-02-18T00:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.859810 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wprf4_f2eb9a65-88b5-49d1-885a-98c60c1283b4/kube-multus/0.log" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.859859 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wprf4" event={"ID":"f2eb9a65-88b5-49d1-885a-98c60c1283b4","Type":"ContainerStarted","Data":"f14a2601bed78c7ba00c461098095c844732f2680236e3fe53ad2a8683126482"} Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.877054 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.892023 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.907874 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.913632 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.913680 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.913697 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.913723 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.913741 4847 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:36Z","lastTransitionTime":"2026-02-18T00:26:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.923707 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.944929 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:26:15Z\\\",\\\"message\\\":\\\"go:140\\\\nI0218 00:26:15.334486 6481 reflector.go:311] Stopping reflector 
*v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:26:15.334516 6481 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 00:26:15.334583 6481 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 00:26:15.334507 6481 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:26:15.334648 6481 factory.go:656] Stopping watch factory\\\\nI0218 00:26:15.334650 6481 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 00:26:15.334597 6481 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:26:15.334666 6481 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 00:26:15.334705 6481 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:26:15.334818 6481 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:26:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxm6w_openshift-ovn-kubernetes(86e5946b-870b-46f1-8923-4a8abd64da45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55
bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.958131 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5rg76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a7318b6-f24d-4785-bd56-ad5ecec493da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5rg76\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.972964 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff63f7c0-7517-44c0-a9d2-dac39aa374ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7093efe7a91f141a0eb9226115d13254da687dd479d70d9fd0736ab942f377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26b1f245e290d81692c7b3ed3f65742fef2a03f29079ca4f8c108879a4c97b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5f92485d5aa0367c4e57cc6d0e1290f2fc5895346260d7a3c809f1c2dcf311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dfa3ef
5dc69520156103624de4469e078651072d23753e7f9ae1f3ec145236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95dfa3ef5dc69520156103624de4469e078651072d23753e7f9ae1f3ec145236\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:36 crc kubenswrapper[4847]: I0218 00:26:36.989175 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T00:26:36Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.003249 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.015799 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.016251 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.016286 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.016295 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.016309 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.016318 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:37Z","lastTransitionTime":"2026-02-18T00:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.040827 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be
30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:
30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.054986 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.065915 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.076342 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.085141 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.093283 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.104883 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f14a2601bed78c7ba00c461098095c844732f2680236e3fe53ad2a8683126482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:26:35Z\\\",\\\"message\\\":\\\"2026-02-18T00:25:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5f609a48-dc45-4f0c-82f6-0c8971f2d627\\\\n2026-02-18T00:25:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5f609a48-dc45-4f0c-82f6-0c8971f2d627 to /host/opt/cni/bin/\\\\n2026-02-18T00:25:50Z [verbose] multus-daemon started\\\\n2026-02-18T00:25:50Z [verbose] Readiness Indicator file check\\\\n2026-02-18T00:26:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.114404 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448c6cccc0c45f6e1da713ddeb37c39ed13a70a3cab0dc7962d39af8f6d97599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a216f41297e68f3ea5eed1a47484c8e9e6c
acfd432e7f8f33624d4e6277cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vk8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.118301 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.118333 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.118342 4847 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.118356 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.118365 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:37Z","lastTransitionTime":"2026-02-18T00:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.221016 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.221096 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.221105 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.221118 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.221128 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:37Z","lastTransitionTime":"2026-02-18T00:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.324096 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.324137 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.324149 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.324167 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.324339 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:37Z","lastTransitionTime":"2026-02-18T00:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.387478 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:50:28.333383361 +0000 UTC Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.406245 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:37 crc kubenswrapper[4847]: E0218 00:26:37.406384 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.419407 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.428121 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.428155 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.428166 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.428182 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.428193 4847 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:37Z","lastTransitionTime":"2026-02-18T00:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.434702 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.447454 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.458383 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.471399 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.500532 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.519289 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.529346 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.529373 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.529384 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.529399 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.529412 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:37Z","lastTransitionTime":"2026-02-18T00:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.531398 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f14a2601bed78c7ba00c461098095c844732f2680236e3fe53ad2a8683126482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:26:35Z\\\",\\\"message\\\":\\\"2026-02-18T00:25:49+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5f609a48-dc45-4f0c-82f6-0c8971f2d627\\\\n2026-02-18T00:25:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5f609a48-dc45-4f0c-82f6-0c8971f2d627 to /host/opt/cni/bin/\\\\n2026-02-18T00:25:50Z [verbose] multus-daemon started\\\\n2026-02-18T00:25:50Z [verbose] Readiness Indicator file check\\\\n2026-02-18T00:26:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.542907 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448c6cccc0c45f6e1da713ddeb37c39ed13a70a3cab0dc7962d39af8f6d97599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a216f41297e68f3ea5eed1a47484c8e9e6c
acfd432e7f8f33624d4e6277cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vk8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.557493 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.574312 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4
a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.593479 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:26:15Z\\\",\\\"message\\\":\\\"go:140\\\\nI0218 00:26:15.334486 6481 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:26:15.334516 6481 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 00:26:15.334583 6481 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0218 00:26:15.334507 6481 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:26:15.334648 6481 factory.go:656] Stopping watch factory\\\\nI0218 00:26:15.334650 6481 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 00:26:15.334597 6481 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:26:15.334666 6481 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 00:26:15.334705 6481 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:26:15.334818 6481 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:26:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxm6w_openshift-ovn-kubernetes(86e5946b-870b-46f1-8923-4a8abd64da45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55
bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.603830 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5rg76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a7318b6-f24d-4785-bd56-ad5ecec493da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5rg76\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.617252 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] 
\\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.628732 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.631976 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.632018 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.632031 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.632046 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.632058 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:37Z","lastTransitionTime":"2026-02-18T00:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.642641 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff63f7c0-7517-44c0-a9d2-dac39aa374ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7093efe7a91f141a0eb9226115d13254da687dd479d70d9fd0736ab942f377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\
\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26b1f245e290d81692c7b3ed3f65742fef2a03f29079ca4f8c108879a4c97b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5f92485d5aa0367c4e57cc6d0e1290f2fc5895346260d7a3c809f1c2dcf311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dfa3ef5dc69520156103624de4469e078651072d23753e7f9ae1f3ec145236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8
a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95dfa3ef5dc69520156103624de4469e078651072d23753e7f9ae1f3ec145236\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.653017 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.662344 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:37Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.733552 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.733571 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.733579 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.733591 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.733619 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:37Z","lastTransitionTime":"2026-02-18T00:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.835144 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.835171 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.835184 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.835195 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.835203 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:37Z","lastTransitionTime":"2026-02-18T00:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.936942 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.937004 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.937021 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.937046 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:37 crc kubenswrapper[4847]: I0218 00:26:37.937068 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:37Z","lastTransitionTime":"2026-02-18T00:26:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.039721 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.039811 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.039828 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.039850 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.039869 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:38Z","lastTransitionTime":"2026-02-18T00:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.142041 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.142072 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.142083 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.142098 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.142108 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:38Z","lastTransitionTime":"2026-02-18T00:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.243818 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.243849 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.243860 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.243875 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.243887 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:38Z","lastTransitionTime":"2026-02-18T00:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.346160 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.346189 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.346197 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.346208 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.346217 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:38Z","lastTransitionTime":"2026-02-18T00:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.388588 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:56:21.262747293 +0000 UTC Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.404081 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.404114 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:38 crc kubenswrapper[4847]: E0218 00:26:38.404201 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.404244 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:38 crc kubenswrapper[4847]: E0218 00:26:38.404338 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:38 crc kubenswrapper[4847]: E0218 00:26:38.404676 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.418719 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.448095 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.448132 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.448143 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.448157 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.448168 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:38Z","lastTransitionTime":"2026-02-18T00:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.550201 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.550240 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.550248 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.550261 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.550270 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:38Z","lastTransitionTime":"2026-02-18T00:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.651816 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.651852 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.651863 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.651878 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.651890 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:38Z","lastTransitionTime":"2026-02-18T00:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.755294 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.755336 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.755345 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.755361 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.755370 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:38Z","lastTransitionTime":"2026-02-18T00:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.859145 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.859183 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.859194 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.859213 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.859223 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:38Z","lastTransitionTime":"2026-02-18T00:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.963016 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.963063 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.963078 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.963103 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:38 crc kubenswrapper[4847]: I0218 00:26:38.963117 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:38Z","lastTransitionTime":"2026-02-18T00:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.066473 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.066527 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.066545 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.066568 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.066585 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:39Z","lastTransitionTime":"2026-02-18T00:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.169021 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.169072 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.169082 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.169097 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.169106 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:39Z","lastTransitionTime":"2026-02-18T00:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.272145 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.272185 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.272196 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.272213 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.272225 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:39Z","lastTransitionTime":"2026-02-18T00:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.374411 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.374453 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.374463 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.374477 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.374486 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:39Z","lastTransitionTime":"2026-02-18T00:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.388835 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 22:41:56.86143213 +0000 UTC Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.403217 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:39 crc kubenswrapper[4847]: E0218 00:26:39.403365 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.476991 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.477043 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.477060 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.477082 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.477098 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:39Z","lastTransitionTime":"2026-02-18T00:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.578863 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.578912 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.578923 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.578935 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.578946 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:39Z","lastTransitionTime":"2026-02-18T00:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.681660 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.681698 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.681706 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.681719 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.681728 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:39Z","lastTransitionTime":"2026-02-18T00:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.783694 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.783733 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.783743 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.783757 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.783769 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:39Z","lastTransitionTime":"2026-02-18T00:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.885497 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.885537 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.885546 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.885560 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.885569 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:39Z","lastTransitionTime":"2026-02-18T00:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.988892 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.988930 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.988940 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.988954 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:39 crc kubenswrapper[4847]: I0218 00:26:39.988963 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:39Z","lastTransitionTime":"2026-02-18T00:26:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.091850 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.091905 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.091920 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.091937 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.091948 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:40Z","lastTransitionTime":"2026-02-18T00:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.195223 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.195270 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.195281 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.195295 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.195304 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:40Z","lastTransitionTime":"2026-02-18T00:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.298145 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.298193 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.298202 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.298242 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.298252 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:40Z","lastTransitionTime":"2026-02-18T00:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.389307 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 07:10:22.592702546 +0000 UTC Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.401034 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.401072 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.401080 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.401093 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.401102 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:40Z","lastTransitionTime":"2026-02-18T00:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.403315 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.403339 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:40 crc kubenswrapper[4847]: E0218 00:26:40.403432 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.403310 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:40 crc kubenswrapper[4847]: E0218 00:26:40.403595 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:40 crc kubenswrapper[4847]: E0218 00:26:40.403693 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.503869 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.503925 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.503940 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.503960 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.503973 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:40Z","lastTransitionTime":"2026-02-18T00:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.606172 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.606204 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.606214 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.606227 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.606236 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:40Z","lastTransitionTime":"2026-02-18T00:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.709406 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.709456 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.709467 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.709520 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.709533 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:40Z","lastTransitionTime":"2026-02-18T00:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.812436 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.812498 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.812518 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.812547 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.812568 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:40Z","lastTransitionTime":"2026-02-18T00:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.914715 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.914801 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.914827 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.914859 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:40 crc kubenswrapper[4847]: I0218 00:26:40.914884 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:40Z","lastTransitionTime":"2026-02-18T00:26:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.017341 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.017415 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.017443 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.017471 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.017493 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:41Z","lastTransitionTime":"2026-02-18T00:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.121097 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.121155 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.121165 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.121183 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.121199 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:41Z","lastTransitionTime":"2026-02-18T00:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.223206 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.223237 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.223245 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.223257 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.223266 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:41Z","lastTransitionTime":"2026-02-18T00:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.326371 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.326455 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.326466 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.326481 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.326492 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:41Z","lastTransitionTime":"2026-02-18T00:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.390390 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 13:40:43.247100036 +0000 UTC Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.403908 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:41 crc kubenswrapper[4847]: E0218 00:26:41.404138 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.429050 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.429100 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.429110 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.429126 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.429136 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:41Z","lastTransitionTime":"2026-02-18T00:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.532417 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.532505 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.532529 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.532561 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.532583 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:41Z","lastTransitionTime":"2026-02-18T00:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.635637 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.635675 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.635684 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.635701 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.635710 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:41Z","lastTransitionTime":"2026-02-18T00:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.738088 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.738167 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.738184 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.738214 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.738231 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:41Z","lastTransitionTime":"2026-02-18T00:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.841570 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.841689 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.841733 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.841762 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.841782 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:41Z","lastTransitionTime":"2026-02-18T00:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.945172 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.945355 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.945418 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.945450 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:41 crc kubenswrapper[4847]: I0218 00:26:41.945471 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:41Z","lastTransitionTime":"2026-02-18T00:26:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.047751 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.047844 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.047864 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.047964 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.048011 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:42Z","lastTransitionTime":"2026-02-18T00:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.151923 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.151968 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.151985 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.152008 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.152025 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:42Z","lastTransitionTime":"2026-02-18T00:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.256027 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.256108 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.256131 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.256159 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.256182 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:42Z","lastTransitionTime":"2026-02-18T00:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.360115 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.360201 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.360221 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.360253 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.360281 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:42Z","lastTransitionTime":"2026-02-18T00:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.390946 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 19:22:33.63147499 +0000 UTC Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.403790 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.403917 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:42 crc kubenswrapper[4847]: E0218 00:26:42.404014 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.403925 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:42 crc kubenswrapper[4847]: E0218 00:26:42.404167 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:42 crc kubenswrapper[4847]: E0218 00:26:42.404347 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.464083 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.464158 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.464180 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.464264 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.464287 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:42Z","lastTransitionTime":"2026-02-18T00:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.566720 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.566785 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.566808 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.566838 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.566862 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:42Z","lastTransitionTime":"2026-02-18T00:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.670842 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.670924 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.670961 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.670993 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.671020 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:42Z","lastTransitionTime":"2026-02-18T00:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.774018 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.774070 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.774080 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.774097 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.774109 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:42Z","lastTransitionTime":"2026-02-18T00:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.877725 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.877787 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.877805 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.877830 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.877849 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:42Z","lastTransitionTime":"2026-02-18T00:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.981224 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.981296 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.981314 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.981338 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:42 crc kubenswrapper[4847]: I0218 00:26:42.981358 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:42Z","lastTransitionTime":"2026-02-18T00:26:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.084247 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.084306 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.084318 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.084335 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.084347 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:43Z","lastTransitionTime":"2026-02-18T00:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.187348 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.187416 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.187433 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.187458 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.187474 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:43Z","lastTransitionTime":"2026-02-18T00:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.291483 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.291534 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.291552 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.291657 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.291686 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:43Z","lastTransitionTime":"2026-02-18T00:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.391301 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:49:36.58752381 +0000 UTC Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.394729 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.394796 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.394814 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.394836 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.394852 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:43Z","lastTransitionTime":"2026-02-18T00:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.404158 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:43 crc kubenswrapper[4847]: E0218 00:26:43.404315 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.498665 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.498758 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.498778 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.498812 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.498836 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:43Z","lastTransitionTime":"2026-02-18T00:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.603753 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.603834 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.603847 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.603871 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.603887 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:43Z","lastTransitionTime":"2026-02-18T00:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.707766 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.707846 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.707865 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.707895 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.707914 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:43Z","lastTransitionTime":"2026-02-18T00:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.811012 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.811104 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.811130 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.811181 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.811205 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:43Z","lastTransitionTime":"2026-02-18T00:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.919670 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.919777 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.919805 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.919835 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:43 crc kubenswrapper[4847]: I0218 00:26:43.919857 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:43Z","lastTransitionTime":"2026-02-18T00:26:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.022948 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.023073 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.023110 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.023188 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.023249 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:44Z","lastTransitionTime":"2026-02-18T00:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.126096 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.126144 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.126153 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.126171 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.126184 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:44Z","lastTransitionTime":"2026-02-18T00:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.229973 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.230031 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.230049 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.230074 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.230092 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:44Z","lastTransitionTime":"2026-02-18T00:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.333430 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.333492 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.333503 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.333524 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.333537 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:44Z","lastTransitionTime":"2026-02-18T00:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.392348 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 01:50:23.547591937 +0000 UTC Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.403785 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.403835 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.403798 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:44 crc kubenswrapper[4847]: E0218 00:26:44.404004 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:44 crc kubenswrapper[4847]: E0218 00:26:44.404196 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:44 crc kubenswrapper[4847]: E0218 00:26:44.404294 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.436433 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.436470 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.436485 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.436503 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.436517 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:44Z","lastTransitionTime":"2026-02-18T00:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.539311 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.539395 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.539421 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.539450 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.539473 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:44Z","lastTransitionTime":"2026-02-18T00:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.642342 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.642416 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.642434 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.642459 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.642476 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:44Z","lastTransitionTime":"2026-02-18T00:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.744844 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.744906 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.744917 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.744933 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.744943 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:44Z","lastTransitionTime":"2026-02-18T00:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.847680 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.847748 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.847760 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.847781 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.847794 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:44Z","lastTransitionTime":"2026-02-18T00:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.951369 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.951448 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.951466 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.951494 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:44 crc kubenswrapper[4847]: I0218 00:26:44.951516 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:44Z","lastTransitionTime":"2026-02-18T00:26:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.053733 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.053779 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.053791 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.053805 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.053816 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:45Z","lastTransitionTime":"2026-02-18T00:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.099405 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.099465 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.099478 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.099504 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.099590 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:45Z","lastTransitionTime":"2026-02-18T00:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:45 crc kubenswrapper[4847]: E0218 00:26:45.123462 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:45Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.128890 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.128949 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.128962 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.128980 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.128992 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:45Z","lastTransitionTime":"2026-02-18T00:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:45 crc kubenswrapper[4847]: E0218 00:26:45.148461 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:45Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.154106 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.154141 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.154153 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.154172 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.154187 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:45Z","lastTransitionTime":"2026-02-18T00:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:45 crc kubenswrapper[4847]: E0218 00:26:45.175092 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:45Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.181382 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.181433 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.181455 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.181486 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.181512 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:45Z","lastTransitionTime":"2026-02-18T00:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:45 crc kubenswrapper[4847]: E0218 00:26:45.225115 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:45Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.230423 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.230473 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.230484 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.230504 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.230517 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:45Z","lastTransitionTime":"2026-02-18T00:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:45 crc kubenswrapper[4847]: E0218 00:26:45.270466 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:45Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:45 crc kubenswrapper[4847]: E0218 00:26:45.270722 4847 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.272927 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.272984 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.272997 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.273011 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.273023 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:45Z","lastTransitionTime":"2026-02-18T00:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.375397 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.375473 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.375484 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.375502 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.375539 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:45Z","lastTransitionTime":"2026-02-18T00:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.393285 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 13:10:56.038884595 +0000 UTC Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.404052 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:45 crc kubenswrapper[4847]: E0218 00:26:45.404258 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.479165 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.479255 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.479274 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.479303 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.479325 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:45Z","lastTransitionTime":"2026-02-18T00:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.583786 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.583852 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.583872 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.583897 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.583916 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:45Z","lastTransitionTime":"2026-02-18T00:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.687721 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.687787 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.687806 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.687834 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.687856 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:45Z","lastTransitionTime":"2026-02-18T00:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.811292 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.811370 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.811395 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.811465 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.811483 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:45Z","lastTransitionTime":"2026-02-18T00:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.914743 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.914834 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.914855 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.914879 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:45 crc kubenswrapper[4847]: I0218 00:26:45.914896 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:45Z","lastTransitionTime":"2026-02-18T00:26:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.018317 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.018400 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.018418 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.018446 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.018467 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:46Z","lastTransitionTime":"2026-02-18T00:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.122111 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.122197 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.122217 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.122246 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.122267 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:46Z","lastTransitionTime":"2026-02-18T00:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.225963 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.226039 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.226061 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.226095 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.226115 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:46Z","lastTransitionTime":"2026-02-18T00:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.330216 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.330312 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.330335 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.330360 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.330380 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:46Z","lastTransitionTime":"2026-02-18T00:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.394519 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 15:39:20.923146686 +0000 UTC Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.403257 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.403357 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.403384 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:46 crc kubenswrapper[4847]: E0218 00:26:46.403529 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:46 crc kubenswrapper[4847]: E0218 00:26:46.403686 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:46 crc kubenswrapper[4847]: E0218 00:26:46.403836 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.405470 4847 scope.go:117] "RemoveContainer" containerID="3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.434863 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.434964 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.434983 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.435008 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.435029 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:46Z","lastTransitionTime":"2026-02-18T00:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.538381 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.538461 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.538481 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.538511 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.538530 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:46Z","lastTransitionTime":"2026-02-18T00:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.641965 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.642002 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.642011 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.642025 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.642035 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:46Z","lastTransitionTime":"2026-02-18T00:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.745304 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.745396 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.745430 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.745470 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.745498 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:46Z","lastTransitionTime":"2026-02-18T00:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.848953 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.849043 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.849065 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.849099 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.849124 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:46Z","lastTransitionTime":"2026-02-18T00:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.899857 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxm6w_86e5946b-870b-46f1-8923-4a8abd64da45/ovnkube-controller/2.log" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.902271 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerStarted","Data":"9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67"} Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.904169 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.927037 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f14a2601bed78c7ba00c461098095c844732f268
0236e3fe53ad2a8683126482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:26:35Z\\\",\\\"message\\\":\\\"2026-02-18T00:25:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5f609a48-dc45-4f0c-82f6-0c8971f2d627\\\\n2026-02-18T00:25:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5f609a48-dc45-4f0c-82f6-0c8971f2d627 to /host/opt/cni/bin/\\\\n2026-02-18T00:25:50Z [verbose] multus-daemon started\\\\n2026-02-18T00:25:50Z [verbose] Readiness Indicator file check\\\\n2026-02-18T00:26:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.945288 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448c6cccc0c45f6e1da713ddeb37c39ed13a70a3cab0dc7962d39af8f6d97599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a216f41297e68f3ea5eed1a47484c8e9e6cacfd432e7f8f33624d4e6277cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:01Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vk8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.951984 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.952050 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.952066 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.952093 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.952110 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:46Z","lastTransitionTime":"2026-02-18T00:26:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.964650 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c25
9c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:46 crc kubenswrapper[4847]: I0218 00:26:46.982393 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c53fc45-b294-400f-a98d-2f841be55fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308b3a78b840de16fff8a1c7ae5a9255a966eca81a3a0cb9e36a6899819fab9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6747f87a275220d0b0a5e243d5dea9341223b8c05e02c2d037a1958f903a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c6747f87a275220d0b0a5e243d5dea9341223b8c05e02c2d037a1958f903a33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:46Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.006837 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.037436 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.054040 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.054093 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.054118 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.054145 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.054164 4847 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:47Z","lastTransitionTime":"2026-02-18T00:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.056415 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.075951 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:26:15Z\\\",\\\"message\\\":\\\"go:140\\\\nI0218 00:26:15.334486 6481 reflector.go:311] Stopping reflector 
*v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:26:15.334516 6481 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 00:26:15.334583 6481 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0218 00:26:15.334507 6481 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:26:15.334648 6481 factory.go:656] Stopping watch factory\\\\nI0218 00:26:15.334650 6481 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 00:26:15.334597 6481 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:26:15.334666 6481 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 00:26:15.334705 6481 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:26:15.334818 6481 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:26:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/ope
nvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994
82919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.090364 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5rg76" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a7318b6-f24d-4785-bd56-ad5ecec493da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5rg76\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc 
kubenswrapper[4847]: I0218 00:26:47.106989 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff63f7c0-7517-44c0-a9d2-dac39aa374ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7093efe7a91f141a0eb9226115d13254da687dd479d70d9fd0736ab942f377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26b1f245e290d81692c7b3ed3f65742fef2a03f29079ca4f8c108879a4c97b86\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5f92485d5aa0367c4e57cc6d0e1290f2fc5895346260d7a3c809f1c2dcf311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dfa3ef5dc69520156103624de4469e078651072d23753e7f9ae1f3ec145236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95dfa3ef5dc69520156103624de4469e078651072d23753e7f9ae1f3ec145236\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.120930 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.131821 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.144511 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.156884 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.156927 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.156937 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.156954 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.156966 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:47Z","lastTransitionTime":"2026-02-18T00:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.172641 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be
30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:
30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.191128 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.208053 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.228123 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.248052 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.258083 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.259343 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.259376 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.259388 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.259404 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.259415 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:47Z","lastTransitionTime":"2026-02-18T00:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.362425 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.362501 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.362519 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.362544 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.362562 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:47Z","lastTransitionTime":"2026-02-18T00:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.395063 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 00:43:07.397792059 +0000 UTC Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.403661 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:47 crc kubenswrapper[4847]: E0218 00:26:47.403911 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.425398 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f14a2601bed78c7ba00c461098095c844732f2680236e3fe53ad2a8683126482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:26:35Z\\\",\\\"message\\\":\\\"2026-02-18T00:25:49+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5f609a48-dc45-4f0c-82f6-0c8971f2d627\\\\n2026-02-18T00:25:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5f609a48-dc45-4f0c-82f6-0c8971f2d627 to /host/opt/cni/bin/\\\\n2026-02-18T00:25:50Z [verbose] multus-daemon started\\\\n2026-02-18T00:25:50Z [verbose] Readiness Indicator file check\\\\n2026-02-18T00:26:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.437863 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448c6cccc0c45f6e1da713ddeb37c39ed13a70a3cab0dc7962d39af8f6d97599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a216f41297e68f3ea5eed1a47484c8e9e6c
acfd432e7f8f33624d4e6277cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vk8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.457957 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\"
,\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.464768 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.464826 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.464845 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.464873 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.464892 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:47Z","lastTransitionTime":"2026-02-18T00:26:47Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.471148 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c53fc45-b294-400f-a98d-2f841be55fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308b3a78b840de16fff8a1c7ae5a9255a966eca81a3a0cb9e36a6899819fab9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},
{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6747f87a275220d0b0a5e243d5dea9341223b8c05e02c2d037a1958f903a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c6747f87a275220d0b0a5e243d5dea9341223b8c05e02c2d037a1958f903a33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.493633 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.510412 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.532295 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4
a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.564013 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:26:15Z\\\",\\\"message\\\":\\\"go:140\\\\nI0218 00:26:15.334486 6481 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:26:15.334516 6481 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 00:26:15.334583 6481 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0218 00:26:15.334507 6481 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:26:15.334648 6481 factory.go:656] Stopping watch factory\\\\nI0218 00:26:15.334650 6481 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 00:26:15.334597 6481 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:26:15.334666 6481 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 00:26:15.334705 6481 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:26:15.334818 6481 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:26:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.569071 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.569128 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.569139 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.569163 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.569177 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:47Z","lastTransitionTime":"2026-02-18T00:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.579460 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5rg76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a7318b6-f24d-4785-bd56-ad5ecec493da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5rg76\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc 
kubenswrapper[4847]: I0218 00:26:47.593042 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff63f7c0-7517-44c0-a9d2-dac39aa374ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7093efe7a91f141a0eb9226115d13254da687dd479d70d9fd0736ab942f377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26b1f245e290d81692c7b3ed3f65742fef2a03f29079ca4f8c108879a4c97b86\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5f92485d5aa0367c4e57cc6d0e1290f2fc5895346260d7a3c809f1c2dcf311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dfa3ef5dc69520156103624de4469e078651072d23753e7f9ae1f3ec145236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95dfa3ef5dc69520156103624de4469e078651072d23753e7f9ae1f3ec145236\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.609991 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.626185 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.646334 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.672902 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.672979 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.673000 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.673028 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.673046 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:47Z","lastTransitionTime":"2026-02-18T00:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.679758 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be
30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:
30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.699374 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.720586 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.740570 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.757590 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.772148 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.776972 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.777039 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.777055 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.777081 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.777099 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:47Z","lastTransitionTime":"2026-02-18T00:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.880447 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.880514 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.880532 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.880556 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.880573 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:47Z","lastTransitionTime":"2026-02-18T00:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.909489 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxm6w_86e5946b-870b-46f1-8923-4a8abd64da45/ovnkube-controller/3.log" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.910560 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxm6w_86e5946b-870b-46f1-8923-4a8abd64da45/ovnkube-controller/2.log" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.914469 4847 generic.go:334] "Generic (PLEG): container finished" podID="86e5946b-870b-46f1-8923-4a8abd64da45" containerID="9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67" exitCode=1 Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.914537 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerDied","Data":"9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67"} Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.914657 4847 scope.go:117] "RemoveContainer" containerID="3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.916292 4847 scope.go:117] "RemoveContainer" containerID="9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67" Feb 18 00:26:47 crc kubenswrapper[4847]: E0218 00:26:47.917316 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bxm6w_openshift-ovn-kubernetes(86e5946b-870b-46f1-8923-4a8abd64da45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.945853 4847 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb7
6a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da5
8084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.968914 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.991859 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:47Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.994435 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.994501 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.994524 4847 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.994553 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:47 crc kubenswrapper[4847]: I0218 00:26:47.994573 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:47Z","lastTransitionTime":"2026-02-18T00:26:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.014890 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.029999 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.047830 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.063013 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.080657 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448c6cccc0c45f6e1da713ddeb37c39ed13a70a3cab0dc7962d39af8f6d97599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a216f41297e68f3ea5eed1a47484c8e9e6c
acfd432e7f8f33624d4e6277cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vk8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.099476 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.099521 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.099536 4847 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.099558 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.099577 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:48Z","lastTransitionTime":"2026-02-18T00:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.103501 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f14a2601bed78c7ba00c461098095c844732f2680236e3fe53ad2a8683126482\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:26:35Z\\\",\\\"message\\\":\\\"2026-02-18T00:25:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5f609a48-dc45-4f0c-82f6-0c8971f2d627\\\\n2026-02-18T00:25:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5f609a48-dc45-4f0c-82f6-0c8971f2d627 to /host/opt/cni/bin/\\\\n2026-02-18T00:25:50Z [verbose] multus-daemon started\\\\n2026-02-18T00:25:50Z [verbose] Readiness Indicator file check\\\\n2026-02-18T00:26:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.121061 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c53fc45-b294-400f-a98d-2f841be55fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308b3a78b840de16fff8a1c7ae5a9255a966eca81a3a0cb9e36a6899819fab9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242
b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6747f87a275220d0b0a5e243d5dea9341223b8c05e02c2d037a1958f903a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c6747f87a275220d0b0a5e243d5dea9341223b8c05e02c2d037a1958f903a33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:48Z is after 
2025-08-24T17:21:41Z" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.135921 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.157731 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.182001 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4
a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.204380 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.204496 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.204559 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.204730 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.204821 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:48Z","lastTransitionTime":"2026-02-18T00:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.206442 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ff9e0568e7a924f35f54f2f1627a4bac57e1564c641c409b6d8176a0280f2c4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:26:15Z\\\",\\\"message\\\":\\\"go:140\\\\nI0218 00:26:15.334486 6481 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:26:15.334516 6481 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0218 00:26:15.334583 6481 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0218 00:26:15.334507 6481 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:26:15.334648 6481 factory.go:656] Stopping watch factory\\\\nI0218 00:26:15.334650 6481 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0218 00:26:15.334597 6481 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0218 00:26:15.334666 6481 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0218 00:26:15.334705 6481 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0218 00:26:15.334818 6481 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:26:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:26:47Z\\\",\\\"message\\\":\\\" server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 00:26:47.470147 6889 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-wfg4t after 0 failed attempt(s)\\\\nI0218 00:26:47.470156 6889 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-wfg4t\\\\nI0218 00:26:47.469488 6889 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}\\\\nI0218 00:26:47.470185 6889 services_controller.go:360] Finished syncing service machine-api-operator-webhook on namespace openshift-machine-api for network=default : 4.082489ms\\\\nI0218 00:26:47.470207 6889 services_controller.go:356] Processing sync for service openshift-machine-api/control-plane-machine-set-operator for network=default\\\\nF0218 00:26:47.470225 6889 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node netw\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:26:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\
"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\
":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.223368 4847 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5rg76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a7318b6-f24d-4785-bd56-ad5ecec493da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5rg76\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:48 crc 
kubenswrapper[4847]: I0218 00:26:48.245797 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846
bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] 
\\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.266438 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T00:26:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.283432 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.301850 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff63f7c0-7517-44c0-a9d2-dac39aa374ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7093efe7a91f141a0eb9226115d13254da687dd479d70d9fd0736ab942f377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26b1f245e290d81692c7b3ed3f65742fef2a03f29079ca4f8c108879a4c97b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5f92485d5aa0367c4e57cc6d0e1290f2fc5895346260d7a3c809f1c2dcf311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dfa3ef5dc69520156103624de4469e078651072d23753e7f9ae1f3ec145236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://95dfa3ef5dc69520156103624de4469e078651072d23753e7f9ae1f3ec145236\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.308548 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.308684 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.308760 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.308792 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.308849 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:48Z","lastTransitionTime":"2026-02-18T00:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.395870 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 15:15:27.689266844 +0000 UTC Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.404303 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.404390 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.404489 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:48 crc kubenswrapper[4847]: E0218 00:26:48.404772 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:48 crc kubenswrapper[4847]: E0218 00:26:48.404905 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:48 crc kubenswrapper[4847]: E0218 00:26:48.405222 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.413471 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.413536 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.413558 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.413583 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.413629 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:48Z","lastTransitionTime":"2026-02-18T00:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.517662 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.517722 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.517740 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.517769 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.517788 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:48Z","lastTransitionTime":"2026-02-18T00:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.621181 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.621258 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.621280 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.621311 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.621336 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:48Z","lastTransitionTime":"2026-02-18T00:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.724803 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.724910 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.724931 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.724959 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.724979 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:48Z","lastTransitionTime":"2026-02-18T00:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.830656 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.830716 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.830726 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.830746 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.830757 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:48Z","lastTransitionTime":"2026-02-18T00:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.924294 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxm6w_86e5946b-870b-46f1-8923-4a8abd64da45/ovnkube-controller/3.log" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.937988 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.938840 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.939013 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.939095 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.939184 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:48Z","lastTransitionTime":"2026-02-18T00:26:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.943139 4847 scope.go:117] "RemoveContainer" containerID="9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67" Feb 18 00:26:48 crc kubenswrapper[4847]: E0218 00:26:48.943890 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bxm6w_openshift-ovn-kubernetes(86e5946b-870b-46f1-8923-4a8abd64da45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" Feb 18 00:26:48 crc kubenswrapper[4847]: I0218 00:26:48.980544 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:26:47Z\\\",\\\"message\\\":\\\" server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 00:26:47.470147 6889 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-wfg4t after 0 failed attempt(s)\\\\nI0218 00:26:47.470156 6889 default_network_controller.go:776] Recording success event on pod 
openshift-multus/multus-additional-cni-plugins-wfg4t\\\\nI0218 00:26:47.469488 6889 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}\\\\nI0218 00:26:47.470185 6889 services_controller.go:360] Finished syncing service machine-api-operator-webhook on namespace openshift-machine-api for network=default : 4.082489ms\\\\nI0218 00:26:47.470207 6889 services_controller.go:356] Processing sync for service openshift-machine-api/control-plane-machine-set-operator for network=default\\\\nF0218 00:26:47.470225 6889 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node netw\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:26:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxm6w_openshift-ovn-kubernetes(86e5946b-870b-46f1-8923-4a8abd64da45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55
bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.001633 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5rg76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a7318b6-f24d-4785-bd56-ad5ecec493da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5rg76\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:48Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.024678 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] 
\\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.040862 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c53fc45-b294-400f-a98d-2f841be55fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308b3a78b840de16fff8a1c7ae5a9255a966eca81a3a0cb9e36a6899819fab9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6747f87a275220d0b0a5e243d5dea9341223b8c05e02c2d037a1958f903a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c6747f87a275220d0b0a5e243d5dea9341223b8c05e02c2d037a1958f903a33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.045937 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.046172 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.046261 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.046339 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.046408 4847 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:49Z","lastTransitionTime":"2026-02-18T00:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.062072 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.081919 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.100862 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4
a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.120239 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff63f7c0-7517-44c0-a9d2-dac39aa374ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7093efe7a91f141a0eb9226115d13254da687dd479d70d9fd0736ab942f377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26b1f245e290d81692c7b3ed3f65742fef2a03f29079ca4f8c108879a4c97b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5f92485d5aa0367c4e57cc6d0e1290f2fc5895346260d7a3c809f1c2dcf311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dfa3ef5dc69520156103624
de4469e078651072d23753e7f9ae1f3ec145236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95dfa3ef5dc69520156103624de4469e078651072d23753e7f9ae1f3ec145236\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.138390 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-18T00:26:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.149311 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.149373 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.149387 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.149406 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.149417 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:49Z","lastTransitionTime":"2026-02-18T00:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.155105 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.174269 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003
db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.192013 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.212192 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.240017 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.251966 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.252014 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.252023 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.252040 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.252050 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:49Z","lastTransitionTime":"2026-02-18T00:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.259168 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.280620 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.297787 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.314082 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f14a2601bed78c7ba00c461098095c844732f2680236e3fe53ad2a8683126482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:26:35Z\\\",\\\"message\\\":\\\"2026-02-18T00:25:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5f609a48-dc45-4f0c-82f6-0c8971f2d627\\\\n2026-02-18T00:25:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5f609a48-dc45-4f0c-82f6-0c8971f2d627 to /host/opt/cni/bin/\\\\n2026-02-18T00:25:50Z [verbose] multus-daemon started\\\\n2026-02-18T00:25:50Z [verbose] 
Readiness Indicator file check\\\\n2026-02-18T00:26:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.330247 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448c6cccc0c45f6e1da713ddeb37c39ed13a70a3cab0dc7962d39af8f6d97599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a216f41297e68f3ea5eed1a47484c8e9e6c
acfd432e7f8f33624d4e6277cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vk8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:49Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.354552 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.354875 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.354980 4847 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.355241 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.355432 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:49Z","lastTransitionTime":"2026-02-18T00:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.396492 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 14:07:23.602992976 +0000 UTC Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.403856 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:49 crc kubenswrapper[4847]: E0218 00:26:49.404098 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.458758 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.458830 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.458852 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.458880 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.458900 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:49Z","lastTransitionTime":"2026-02-18T00:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.561946 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.562225 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.562287 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.562362 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.562419 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:49Z","lastTransitionTime":"2026-02-18T00:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.664886 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.664975 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.665005 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.665035 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.665055 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:49Z","lastTransitionTime":"2026-02-18T00:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.768700 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.768991 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.769070 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.769152 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.769235 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:49Z","lastTransitionTime":"2026-02-18T00:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.873630 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.873705 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.873729 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.873764 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.873789 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:49Z","lastTransitionTime":"2026-02-18T00:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.978053 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.978142 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.978168 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.978202 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:49 crc kubenswrapper[4847]: I0218 00:26:49.978225 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:49Z","lastTransitionTime":"2026-02-18T00:26:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.081452 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.081518 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.081539 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.081566 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.081592 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:50Z","lastTransitionTime":"2026-02-18T00:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.187404 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.187478 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.187504 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.187535 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.187557 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:50Z","lastTransitionTime":"2026-02-18T00:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.268542 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:26:50 crc kubenswrapper[4847]: E0218 00:26:50.268832 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 00:27:54.26877711 +0000 UTC m=+147.646128192 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.291225 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.291374 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.291398 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.291430 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.291455 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:50Z","lastTransitionTime":"2026-02-18T00:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.370539 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.370657 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.370758 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.370797 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:50 crc kubenswrapper[4847]: E0218 00:26:50.370848 4847 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:26:50 crc kubenswrapper[4847]: E0218 00:26:50.370956 4847 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:26:50 crc kubenswrapper[4847]: E0218 00:26:50.370973 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:27:54.370941756 +0000 UTC m=+147.748292738 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:26:50 crc kubenswrapper[4847]: E0218 00:26:50.371001 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:26:50 crc kubenswrapper[4847]: E0218 00:26:50.371063 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:26:50 crc kubenswrapper[4847]: E0218 00:26:50.371065 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:26:50 crc kubenswrapper[4847]: E0218 00:26:50.371093 4847 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:26:50 crc kubenswrapper[4847]: E0218 00:26:50.371109 4847 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:26:50 crc kubenswrapper[4847]: E0218 00:26:50.371133 4847 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:26:50 crc kubenswrapper[4847]: E0218 00:26:50.371030 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-18 00:27:54.371008477 +0000 UTC m=+147.748359459 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:26:50 crc kubenswrapper[4847]: E0218 00:26:50.371194 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-18 00:27:54.371162361 +0000 UTC m=+147.748513343 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:26:50 crc kubenswrapper[4847]: E0218 00:26:50.371230 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-18 00:27:54.371212302 +0000 UTC m=+147.748563274 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.394507 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.394631 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.394652 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.394682 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.394704 4847 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:50Z","lastTransitionTime":"2026-02-18T00:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.397655 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:47:56.572976433 +0000 UTC Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.403292 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.403356 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.403395 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:50 crc kubenswrapper[4847]: E0218 00:26:50.403498 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:50 crc kubenswrapper[4847]: E0218 00:26:50.403719 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:50 crc kubenswrapper[4847]: E0218 00:26:50.403840 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.498369 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.498452 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.498480 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.498517 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.498540 4847 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:50Z","lastTransitionTime":"2026-02-18T00:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.603580 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.603714 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.603744 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.603790 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.603831 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:50Z","lastTransitionTime":"2026-02-18T00:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.710723 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.710774 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.710783 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.710798 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.710808 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:50Z","lastTransitionTime":"2026-02-18T00:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.813683 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.813736 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.813754 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.813777 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.813794 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:50Z","lastTransitionTime":"2026-02-18T00:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.916941 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.917004 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.917023 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.917053 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:50 crc kubenswrapper[4847]: I0218 00:26:50.917076 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:50Z","lastTransitionTime":"2026-02-18T00:26:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.019934 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.019998 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.020015 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.020036 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.020050 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:51Z","lastTransitionTime":"2026-02-18T00:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.124094 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.124169 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.124196 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.124223 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.124243 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:51Z","lastTransitionTime":"2026-02-18T00:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.227519 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.227927 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.227993 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.228068 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.228131 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:51Z","lastTransitionTime":"2026-02-18T00:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.330448 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.330515 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.330533 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.330559 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.330577 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:51Z","lastTransitionTime":"2026-02-18T00:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.397821 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 19:15:24.799411454 +0000 UTC Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.404235 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:51 crc kubenswrapper[4847]: E0218 00:26:51.404388 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.432697 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.433096 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.433211 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.433328 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.433456 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:51Z","lastTransitionTime":"2026-02-18T00:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.536276 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.537128 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.537436 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.537739 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.537936 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:51Z","lastTransitionTime":"2026-02-18T00:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.641999 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.642049 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.642061 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.642077 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.642088 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:51Z","lastTransitionTime":"2026-02-18T00:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.745793 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.745879 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.745902 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.745935 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.745962 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:51Z","lastTransitionTime":"2026-02-18T00:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.849780 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.849867 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.849892 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.849925 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.849953 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:51Z","lastTransitionTime":"2026-02-18T00:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.952657 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.952937 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.953015 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.953076 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:51 crc kubenswrapper[4847]: I0218 00:26:51.953135 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:51Z","lastTransitionTime":"2026-02-18T00:26:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.055873 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.055972 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.055990 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.056016 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.056033 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:52Z","lastTransitionTime":"2026-02-18T00:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.160075 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.160329 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.160409 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.160471 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.160524 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:52Z","lastTransitionTime":"2026-02-18T00:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.264825 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.264887 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.264906 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.264928 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.264947 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:52Z","lastTransitionTime":"2026-02-18T00:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.367425 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.367523 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.367562 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.367592 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.367650 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:52Z","lastTransitionTime":"2026-02-18T00:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.398564 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 14:07:32.998528237 +0000 UTC Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.403964 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.404146 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.404317 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:52 crc kubenswrapper[4847]: E0218 00:26:52.404458 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:52 crc kubenswrapper[4847]: E0218 00:26:52.404727 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:52 crc kubenswrapper[4847]: E0218 00:26:52.404856 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.471320 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.471401 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.471423 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.471450 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.471469 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:52Z","lastTransitionTime":"2026-02-18T00:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.574662 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.574741 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.574760 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.574790 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.574811 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:52Z","lastTransitionTime":"2026-02-18T00:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.679088 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.679156 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.679176 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.679201 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.679221 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:52Z","lastTransitionTime":"2026-02-18T00:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.783324 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.783388 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.783414 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.783441 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.783456 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:52Z","lastTransitionTime":"2026-02-18T00:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.886771 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.886831 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.886848 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.886874 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.886891 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:52Z","lastTransitionTime":"2026-02-18T00:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.990798 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.990934 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.990954 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.990982 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:52 crc kubenswrapper[4847]: I0218 00:26:52.991000 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:52Z","lastTransitionTime":"2026-02-18T00:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.094541 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.094957 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.095023 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.095070 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.095100 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:53Z","lastTransitionTime":"2026-02-18T00:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.200072 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.200138 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.200155 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.200179 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.200198 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:53Z","lastTransitionTime":"2026-02-18T00:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.303909 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.304334 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.304399 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.304491 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.304556 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:53Z","lastTransitionTime":"2026-02-18T00:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.399311 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 14:16:49.448526672 +0000 UTC Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.403803 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:53 crc kubenswrapper[4847]: E0218 00:26:53.403937 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.413938 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.414002 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.414018 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.414039 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.414053 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:53Z","lastTransitionTime":"2026-02-18T00:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.516255 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.516315 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.516325 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.516339 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.516349 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:53Z","lastTransitionTime":"2026-02-18T00:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.619969 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.620025 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.620035 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.620053 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.620064 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:53Z","lastTransitionTime":"2026-02-18T00:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.722305 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.722340 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.722350 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.722362 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.722372 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:53Z","lastTransitionTime":"2026-02-18T00:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.825332 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.825371 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.825380 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.825395 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.825406 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:53Z","lastTransitionTime":"2026-02-18T00:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.927697 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.927730 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.927740 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.927754 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:53 crc kubenswrapper[4847]: I0218 00:26:53.927765 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:53Z","lastTransitionTime":"2026-02-18T00:26:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.030296 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.030341 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.030353 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.030369 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.030382 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:54Z","lastTransitionTime":"2026-02-18T00:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.133684 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.133763 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.133784 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.133811 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.133834 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:54Z","lastTransitionTime":"2026-02-18T00:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.237435 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.237487 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.237506 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.237534 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.237554 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:54Z","lastTransitionTime":"2026-02-18T00:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.341100 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.341161 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.341178 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.341205 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.341223 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:54Z","lastTransitionTime":"2026-02-18T00:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.400115 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 11:29:56.452686815 +0000 UTC Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.403442 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.403442 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.403919 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:54 crc kubenswrapper[4847]: E0218 00:26:54.404076 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:54 crc kubenswrapper[4847]: E0218 00:26:54.404184 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:54 crc kubenswrapper[4847]: E0218 00:26:54.404548 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.444482 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.444533 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.444551 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.444574 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.444592 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:54Z","lastTransitionTime":"2026-02-18T00:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.547899 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.547991 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.548013 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.548047 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.548072 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:54Z","lastTransitionTime":"2026-02-18T00:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.651811 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.651912 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.651925 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.651941 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.651953 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:54Z","lastTransitionTime":"2026-02-18T00:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.754721 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.754789 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.754814 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.754844 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.754868 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:54Z","lastTransitionTime":"2026-02-18T00:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.857597 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.857708 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.857725 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.857750 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.857889 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:54Z","lastTransitionTime":"2026-02-18T00:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.960744 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.960794 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.960807 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.960825 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:54 crc kubenswrapper[4847]: I0218 00:26:54.960842 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:54Z","lastTransitionTime":"2026-02-18T00:26:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.064300 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.064380 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.064405 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.064435 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.064457 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:55Z","lastTransitionTime":"2026-02-18T00:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.167740 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.167802 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.167819 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.167841 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.167858 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:55Z","lastTransitionTime":"2026-02-18T00:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.271340 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.271854 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.272072 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.272243 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.272696 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:55Z","lastTransitionTime":"2026-02-18T00:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.377348 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.377893 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.378110 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.378322 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.378523 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:55Z","lastTransitionTime":"2026-02-18T00:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.400346 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 06:44:15.183343411 +0000 UTC Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.403867 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:55 crc kubenswrapper[4847]: E0218 00:26:55.404314 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.483351 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.483436 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.483454 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.483480 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.483501 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:55Z","lastTransitionTime":"2026-02-18T00:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.587822 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.587910 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.587934 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.587964 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.587988 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:55Z","lastTransitionTime":"2026-02-18T00:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.633535 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.633649 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.633668 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.633692 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.633711 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:55Z","lastTransitionTime":"2026-02-18T00:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:55 crc kubenswrapper[4847]: E0218 00:26:55.665264 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.670993 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.671037 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.671049 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.671065 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.671075 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:55Z","lastTransitionTime":"2026-02-18T00:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:55 crc kubenswrapper[4847]: E0218 00:26:55.690713 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.697277 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.697663 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.697854 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.698027 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.698187 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:55Z","lastTransitionTime":"2026-02-18T00:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:55 crc kubenswrapper[4847]: E0218 00:26:55.719247 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.725788 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.725874 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.725891 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.725913 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.725926 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:55Z","lastTransitionTime":"2026-02-18T00:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:55 crc kubenswrapper[4847]: E0218 00:26:55.744788 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.751951 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.752015 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.752036 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.752065 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.752087 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:55Z","lastTransitionTime":"2026-02-18T00:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:55 crc kubenswrapper[4847]: E0218 00:26:55.773418 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:55Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:55 crc kubenswrapper[4847]: E0218 00:26:55.773577 4847 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.776282 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.776325 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.776341 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.776363 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.776377 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:55Z","lastTransitionTime":"2026-02-18T00:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.880144 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.880215 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.880235 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.880264 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.880284 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:55Z","lastTransitionTime":"2026-02-18T00:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.982974 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.983057 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.983078 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.983111 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:55 crc kubenswrapper[4847]: I0218 00:26:55.983134 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:55Z","lastTransitionTime":"2026-02-18T00:26:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.087777 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.087844 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.087858 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.087880 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.087891 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:56Z","lastTransitionTime":"2026-02-18T00:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.190733 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.190847 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.190872 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.190908 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.190935 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:56Z","lastTransitionTime":"2026-02-18T00:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.293973 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.294023 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.294036 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.294053 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.294065 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:56Z","lastTransitionTime":"2026-02-18T00:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.397057 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.397136 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.397160 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.397193 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.397215 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:56Z","lastTransitionTime":"2026-02-18T00:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.402380 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 23:00:44.869228297 +0000 UTC Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.403747 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:56 crc kubenswrapper[4847]: E0218 00:26:56.403898 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.403748 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:56 crc kubenswrapper[4847]: E0218 00:26:56.404094 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.404209 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:56 crc kubenswrapper[4847]: E0218 00:26:56.404325 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.500341 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.500392 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.500407 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.500428 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.500443 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:56Z","lastTransitionTime":"2026-02-18T00:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.603794 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.603837 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.603847 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.603866 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.603883 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:56Z","lastTransitionTime":"2026-02-18T00:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.707374 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.707420 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.707432 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.707449 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.707464 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:56Z","lastTransitionTime":"2026-02-18T00:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.810829 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.810899 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.810918 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.810945 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.810965 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:56Z","lastTransitionTime":"2026-02-18T00:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.914468 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.914539 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.914556 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.914578 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:56 crc kubenswrapper[4847]: I0218 00:26:56.914592 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:56Z","lastTransitionTime":"2026-02-18T00:26:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.018933 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.018982 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.018992 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.019006 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.019015 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:57Z","lastTransitionTime":"2026-02-18T00:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.121651 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.121703 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.121714 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.121732 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.121743 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:57Z","lastTransitionTime":"2026-02-18T00:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.224323 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.224367 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.224410 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.224430 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.224460 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:57Z","lastTransitionTime":"2026-02-18T00:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.327271 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.327309 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.327317 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.327329 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.327339 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:57Z","lastTransitionTime":"2026-02-18T00:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.403689 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.403817 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 10:06:52.355662958 +0000 UTC Feb 18 00:26:57 crc kubenswrapper[4847]: E0218 00:26:57.405499 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.425737 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff63f7c0-7517-44c0-a9d2-dac39aa374ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7093efe7a91f141a0eb9226115d13254da687dd479d70d9fd0736ab942f377d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26b1f245e290d81692c7b3ed3f65742fef2a03f29079ca4f8c108879a4c97b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc5f92485d5aa0367c4e57cc6d0e1290f2fc5895346260d7a3c809f1c2dcf311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dfa3ef5dc69520156103624de4469e078651072d23753e7f9ae1f3ec145236\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95dfa3ef5dc69520156103624de4469e078651072d23753e7f9ae1f3ec145236\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.431457 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.431544 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.431665 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.431698 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.431717 4847 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:57Z","lastTransitionTime":"2026-02-18T00:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.445476 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://edaf448b569dbf635913c1fa724e49820161ff0b4eb15ef0899bb73a836d2e94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.462169 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d9clg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b10e15ef-4ac4-4ad4-9b20-e005f4b3d484\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://287258b722259e18011ffa677cb025fe3fa956ebf528ecd81dd50981df6fe793\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d9clg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.479190 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4w5fp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1185a103-f769-4668-9fe0-099078aeb848\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e25af096e4e027609383692671b456be21abb9c63c2d504bf20ce0ca37124e76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptjlr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4w5fp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.500538 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3c594d1-98da-4829-b039-f8ea9ce2ba23\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61566d2c384f0b4edf30798aa4d308949c79577656566b609308763aba239e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59e1cb9f1c4e0e50b05bb7d3443832778b75a877b34f638942d8befee592fa89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://784f638cf363b19f86959f540a2e594569af9b335d94f2d40d453ff89ac79340\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.534564 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.534662 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 
00:26:57.534681 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.534709 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.534727 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:57Z","lastTransitionTime":"2026-02-18T00:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.534802 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75d84ea2-94dd-46dd-94de-a69888a3e5f5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0b4b1ef0cb0b9fab58204e42a95858ac4228133834b0b9ae986a7199f7b0579\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56069de95b7f8dcec90f20ae6e3de0347c3d223174570d94b0cb76a137c3e300\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b415307f0e0b9ddff698bcdb9d2909f36e2ef1e7c50f8f741213df0c1dd6a99a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1903a4900a42c5c7a1b7a75cb9b0cebd349c56fa0652b56dfe8d56d0bc6f7ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a7be260857951ddd24ddf9be6683673be1b2af63a1cb354019e6dafc3817103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ffa442a37e53f03c0b164a5d43da58084c53eae124eac1e0733403a5f38c7164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4aa2a0489d219a69fb8e83b4603481388e518270b960cde0349cfba72079bc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}}},{\\\"containerID\
\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cdc98c4bc64d69c511e71ed7a1039b7d48c3a7717c870844e1a671c5e2a75d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.560493 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60fb560c163b382c61f21c408c93f68d06cecc5f15380d9f1119a22b68fd7df4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.583893 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.606106 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.625262 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27709e7003db9f7e4abcf85dad33a139fffa50dec94c3e20d29c41a8fb07061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57
c92d0e5b4f033ed0aeab7831\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g8h9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xsj47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.640290 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.640331 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.640649 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:57 crc 
kubenswrapper[4847]: I0218 00:26:57.640675 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.640885 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:57Z","lastTransitionTime":"2026-02-18T00:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.643399 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wprf4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2eb9a65-88b5-49d1-885a-98c60c1283b4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f14a2601bed78c7ba00c461098095c844732f2680236e3fe53ad2a8683126482\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:26:35Z\\\",\\\"message\\\":\\\"2026-02-18T00:25:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5f609a48-dc45-4f0c-82f6-0c8971f2d627\\\\n2026-02-18T00:25:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5f609a48-dc45-4f0c-82f6-0c8971f2d627 to /host/opt/cni/bin/\\\\n2026-02-18T00:25:50Z [verbose] multus-daemon started\\\\n2026-02-18T00:25:50Z [verbose] Readiness Indicator file check\\\\n2026-02-18T00:26:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zp6tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wprf4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.662114 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69d302fa-7d6b-4e4b-9dfe-71ed7d60b342\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448c6cccc0c45f6e1da713ddeb37c39ed13a70a3cab0dc7962d39af8f6d97599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf
09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://38a216f41297e68f3ea5eed1a47484c8e9e6cacfd432e7f8f33624d4e6277cd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:26:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h8l66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:01Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vk8bl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.680587 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5rg76" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a7318b6-f24d-4785-bd56-ad5ecec493da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svnct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:26:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5rg76\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:57 crc 
kubenswrapper[4847]: I0218 00:26:57.704748 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08262fa8-b3b6-49f5-b5cd-d9d81dddb06e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:26:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61186707cc4846
bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1771374341\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1771374340\\\\\\\\\\\\\\\" (2026-02-17 23:25:40 +0000 UTC to 2027-02-17 23:25:40 +0000 UTC (now=2026-02-18 00:25:46.250684512 +0000 UTC))\\\\\\\"\\\\nI0218 00:25:46.250725 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0218 00:25:46.250748 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0218 00:25:46.250773 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250793 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0218 00:25:46.250840 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250849 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0218 00:25:46.250852 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1999904905/tls.crt::/tmp/serving-cert-1999904905/tls.key\\\\\\\"\\\\nI0218 00:25:46.250869 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0218 00:25:46.250896 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0218 00:25:46.250909 1 envvar.go:172] 
\\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0218 00:25:46.250914 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0218 00:25:46.250919 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0218 00:25:46.251162 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.723254 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c53fc45-b294-400f-a98d-2f841be55fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308b3a78b840de16fff8a1c7ae5a9255a966eca81a3a0cb9e36a6899819fab9c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c6747f87a275220d0b0a5e243d5dea9341223b8c05e02c2d037a1958f903a33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c6747f87a275220d0b0a5e243d5dea9341223b8c05e02c2d037a1958f903a33\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.744183 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.744240 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.744258 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.744285 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.744305 4847 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:57Z","lastTransitionTime":"2026-02-18T00:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.745060 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.768692 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3578a2a8ad43a8349151248de09b28eecd5f7a98d640f2645638f5f3d4eddd7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://92916cbecefe041920c2c2363215172e5e101a47d7b80e3fbd5ed109f2e97b2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.793141 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"94a6901a-92ec-4fd6-8ee3-ff3e6971c003\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408b7ea2cda4614c4b88e0bd35d9a38124f813723fd03d05b93c173d3c773773\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64d6c9e85fa2824d725e09c10fee85055af57ad12f33386f9380e2d9d53a1153\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50a24c4b79276a8dee88c939255588fd25e5f6761b6bccf7b523049b38df8e86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ff6281f506707e4b5f02b6fd174c8f1ac63d22b2a8bde02379a16956ba1891e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597a4
a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://597a4a037494248f818c12f1dc4a2a3550dfe9f02f5b4ba5776be1be06c8d9bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a356771c0a9f001df342ff833136fb5589bad69bce3bdcb004258b5c061169ea\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:53Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0e6755636ddb214554fc853ccceffa44db9b8461e292e9bac9271e2ae027131\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6ld5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wfg4t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.828401 4847 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86e5946b-870b-46f1-8923-4a8abd64da45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:50Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-18T00:26:47Z\\\",\\\"message\\\":\\\" server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0218 00:26:47.470147 6889 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-wfg4t after 0 failed attempt(s)\\\\nI0218 00:26:47.470156 6889 default_network_controller.go:776] Recording success event on pod 
openshift-multus/multus-additional-cni-plugins-wfg4t\\\\nI0218 00:26:47.469488 6889 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}\\\\nI0218 00:26:47.470185 6889 services_controller.go:360] Finished syncing service machine-api-operator-webhook on namespace openshift-machine-api for network=default : 4.082489ms\\\\nI0218 00:26:47.470207 6889 services_controller.go:356] Processing sync for service openshift-machine-api/control-plane-machine-set-operator for network=default\\\\nF0218 00:26:47.470225 6889 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node netw\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:26:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bxm6w_openshift-ovn-kubernetes(86e5946b-870b-46f1-8923-4a8abd64da45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:25:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03aa4131b03e391b55
bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:25:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:25:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fjwgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:25:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bxm6w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:26:57Z is after 2025-08-24T17:21:41Z" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.848373 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.848511 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.848578 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.848672 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.848743 4847 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:57Z","lastTransitionTime":"2026-02-18T00:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.952562 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.953043 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.953278 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.953572 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:57 crc kubenswrapper[4847]: I0218 00:26:57.953789 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:57Z","lastTransitionTime":"2026-02-18T00:26:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.056699 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.056760 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.056777 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.056805 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.056824 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:58Z","lastTransitionTime":"2026-02-18T00:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.160009 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.160067 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.160083 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.160107 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.160127 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:58Z","lastTransitionTime":"2026-02-18T00:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.263176 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.263231 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.263248 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.263268 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.263284 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:58Z","lastTransitionTime":"2026-02-18T00:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.366724 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.366795 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.366814 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.366839 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.366857 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:58Z","lastTransitionTime":"2026-02-18T00:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.403771 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.403957 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.403918 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 00:00:50.442492339 +0000 UTC Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.403985 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:26:58 crc kubenswrapper[4847]: E0218 00:26:58.405265 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:26:58 crc kubenswrapper[4847]: E0218 00:26:58.405375 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:26:58 crc kubenswrapper[4847]: E0218 00:26:58.404931 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.470476 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.470532 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.470547 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.470568 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.470585 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:58Z","lastTransitionTime":"2026-02-18T00:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.573916 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.574002 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.574022 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.574049 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.574067 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:58Z","lastTransitionTime":"2026-02-18T00:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.677730 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.678254 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.678403 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.678560 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.678763 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:58Z","lastTransitionTime":"2026-02-18T00:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.783368 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.784065 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.784089 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.784118 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.784135 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:58Z","lastTransitionTime":"2026-02-18T00:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.886591 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.886708 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.886727 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.886758 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.886778 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:58Z","lastTransitionTime":"2026-02-18T00:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.990491 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.990580 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.990636 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.990670 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:58 crc kubenswrapper[4847]: I0218 00:26:58.990696 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:58Z","lastTransitionTime":"2026-02-18T00:26:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.093715 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.093778 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.093800 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.093825 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.093843 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:59Z","lastTransitionTime":"2026-02-18T00:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.197699 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.197777 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.197794 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.197819 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.197837 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:59Z","lastTransitionTime":"2026-02-18T00:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.302170 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.302245 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.302264 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.302294 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.302315 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:59Z","lastTransitionTime":"2026-02-18T00:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.403833 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:26:59 crc kubenswrapper[4847]: E0218 00:26:59.403997 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.405509 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 23:25:19.037940515 +0000 UTC Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.405636 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.405673 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.405693 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.405712 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.405728 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:59Z","lastTransitionTime":"2026-02-18T00:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.509566 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.509658 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.509676 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.509703 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.509721 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:59Z","lastTransitionTime":"2026-02-18T00:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.613259 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.613840 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.614046 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.614170 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.614284 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:59Z","lastTransitionTime":"2026-02-18T00:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.718555 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.719142 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.719283 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.719587 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.719781 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:59Z","lastTransitionTime":"2026-02-18T00:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.823154 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.823748 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.823906 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.824054 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.824214 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:59Z","lastTransitionTime":"2026-02-18T00:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.927306 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.927356 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.927369 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.927390 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:26:59 crc kubenswrapper[4847]: I0218 00:26:59.927405 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:26:59Z","lastTransitionTime":"2026-02-18T00:26:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.032042 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.032108 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.032127 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.032155 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.032177 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:00Z","lastTransitionTime":"2026-02-18T00:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.135632 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.135778 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.135806 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.135833 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.135854 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:00Z","lastTransitionTime":"2026-02-18T00:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.239097 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.239146 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.239154 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.239167 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.239177 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:00Z","lastTransitionTime":"2026-02-18T00:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.343867 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.343966 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.343985 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.344012 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.344030 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:00Z","lastTransitionTime":"2026-02-18T00:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.404239 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.404270 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.404324 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:27:00 crc kubenswrapper[4847]: E0218 00:27:00.404439 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:27:00 crc kubenswrapper[4847]: E0218 00:27:00.404772 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:27:00 crc kubenswrapper[4847]: E0218 00:27:00.405196 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.405753 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 00:08:20.638252802 +0000 UTC Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.446920 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.447003 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.447051 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.447073 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.447086 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:00Z","lastTransitionTime":"2026-02-18T00:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.550563 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.550661 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.550678 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.550700 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.550717 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:00Z","lastTransitionTime":"2026-02-18T00:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.661901 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.661992 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.662014 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.662046 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.662067 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:00Z","lastTransitionTime":"2026-02-18T00:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.765338 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.765408 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.765431 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.765460 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.765483 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:00Z","lastTransitionTime":"2026-02-18T00:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.869216 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.869296 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.869322 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.869349 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.869369 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:00Z","lastTransitionTime":"2026-02-18T00:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.973114 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.973192 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.973210 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.973250 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:00 crc kubenswrapper[4847]: I0218 00:27:00.973268 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:00Z","lastTransitionTime":"2026-02-18T00:27:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.076545 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.076633 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.076654 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.076675 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.076693 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:01Z","lastTransitionTime":"2026-02-18T00:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.179899 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.179956 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.179974 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.179998 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.180017 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:01Z","lastTransitionTime":"2026-02-18T00:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.283425 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.283481 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.283497 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.283521 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.283539 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:01Z","lastTransitionTime":"2026-02-18T00:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.386898 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.386959 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.386980 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.387007 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.387024 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:01Z","lastTransitionTime":"2026-02-18T00:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.403386 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:01 crc kubenswrapper[4847]: E0218 00:27:01.403566 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.404820 4847 scope.go:117] "RemoveContainer" containerID="9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67" Feb 18 00:27:01 crc kubenswrapper[4847]: E0218 00:27:01.405125 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bxm6w_openshift-ovn-kubernetes(86e5946b-870b-46f1-8923-4a8abd64da45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.406583 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 18:45:54.533251076 +0000 UTC Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.490929 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.490994 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.491011 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.491035 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.491052 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:01Z","lastTransitionTime":"2026-02-18T00:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.594081 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.594146 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.594164 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.594187 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.594205 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:01Z","lastTransitionTime":"2026-02-18T00:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.697885 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.697954 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.697976 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.698000 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.698021 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:01Z","lastTransitionTime":"2026-02-18T00:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.800545 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.800641 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.800662 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.800688 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.800706 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:01Z","lastTransitionTime":"2026-02-18T00:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.904008 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.904084 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.904109 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.904144 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:01 crc kubenswrapper[4847]: I0218 00:27:01.904170 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:01Z","lastTransitionTime":"2026-02-18T00:27:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.008108 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.008186 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.008203 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.008222 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.008238 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:02Z","lastTransitionTime":"2026-02-18T00:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.111466 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.111522 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.111540 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.111565 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.111587 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:02Z","lastTransitionTime":"2026-02-18T00:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.215393 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.215502 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.215528 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.215562 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.215585 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:02Z","lastTransitionTime":"2026-02-18T00:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.319906 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.319958 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.319972 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.319991 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.320003 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:02Z","lastTransitionTime":"2026-02-18T00:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.403951 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:27:02 crc kubenswrapper[4847]: E0218 00:27:02.404051 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.404393 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.404490 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:02 crc kubenswrapper[4847]: E0218 00:27:02.404544 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:27:02 crc kubenswrapper[4847]: E0218 00:27:02.404684 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.406945 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:21:22.298507464 +0000 UTC Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.422761 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.422799 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.422810 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.422824 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.422839 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:02Z","lastTransitionTime":"2026-02-18T00:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.525864 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.525924 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.525945 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.525971 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.525989 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:02Z","lastTransitionTime":"2026-02-18T00:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.629400 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.629458 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.629470 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.629488 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.629501 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:02Z","lastTransitionTime":"2026-02-18T00:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.732194 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.732254 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.732270 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.732291 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.732304 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:02Z","lastTransitionTime":"2026-02-18T00:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.834901 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.834948 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.834957 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.834976 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.834985 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:02Z","lastTransitionTime":"2026-02-18T00:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.938355 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.938442 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.938473 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.938507 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:02 crc kubenswrapper[4847]: I0218 00:27:02.938528 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:02Z","lastTransitionTime":"2026-02-18T00:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.049384 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.049481 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.049507 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.049541 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.049568 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:03Z","lastTransitionTime":"2026-02-18T00:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.155327 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.155397 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.155415 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.155440 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.155459 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:03Z","lastTransitionTime":"2026-02-18T00:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.258115 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.258184 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.258209 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.258236 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.258259 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:03Z","lastTransitionTime":"2026-02-18T00:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.361568 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.361632 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.361641 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.361655 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.361664 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:03Z","lastTransitionTime":"2026-02-18T00:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.403344 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:03 crc kubenswrapper[4847]: E0218 00:27:03.403667 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.407814 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 12:14:05.443797656 +0000 UTC Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.464987 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.465049 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.465068 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.465096 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.465118 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:03Z","lastTransitionTime":"2026-02-18T00:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.568675 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.568749 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.568766 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.568792 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.568812 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:03Z","lastTransitionTime":"2026-02-18T00:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.671825 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.671890 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.671908 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.671934 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.671955 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:03Z","lastTransitionTime":"2026-02-18T00:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.775472 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.775655 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.775687 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.775717 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.775741 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:03Z","lastTransitionTime":"2026-02-18T00:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.879837 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.879917 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.879942 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.879979 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.880003 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:03Z","lastTransitionTime":"2026-02-18T00:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.983683 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.983771 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.983808 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.983847 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:03 crc kubenswrapper[4847]: I0218 00:27:03.983872 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:03Z","lastTransitionTime":"2026-02-18T00:27:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.087722 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.087805 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.087826 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.087856 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.087882 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:04Z","lastTransitionTime":"2026-02-18T00:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.192644 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.192717 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.192736 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.192763 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.192788 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:04Z","lastTransitionTime":"2026-02-18T00:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.297126 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.297216 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.297235 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.297263 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.297286 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:04Z","lastTransitionTime":"2026-02-18T00:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.400809 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.400878 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.400896 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.400925 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.400949 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:04Z","lastTransitionTime":"2026-02-18T00:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.404196 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.404231 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:04 crc kubenswrapper[4847]: E0218 00:27:04.404396 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.404331 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:27:04 crc kubenswrapper[4847]: E0218 00:27:04.404509 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:27:04 crc kubenswrapper[4847]: E0218 00:27:04.404708 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.408201 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 09:51:35.184087478 +0000 UTC Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.504401 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.504444 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.504453 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.504468 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.504477 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:04Z","lastTransitionTime":"2026-02-18T00:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.607774 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.607852 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.607878 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.607913 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.607938 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:04Z","lastTransitionTime":"2026-02-18T00:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.711389 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.711457 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.711480 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.711508 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.711526 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:04Z","lastTransitionTime":"2026-02-18T00:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.814471 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.814519 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.814531 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.814548 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.814563 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:04Z","lastTransitionTime":"2026-02-18T00:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.916858 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.916901 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.916910 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.916923 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:04 crc kubenswrapper[4847]: I0218 00:27:04.916932 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:04Z","lastTransitionTime":"2026-02-18T00:27:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.019092 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.019148 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.019159 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.019174 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.019184 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:05Z","lastTransitionTime":"2026-02-18T00:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.121695 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.121739 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.121752 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.121771 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.121783 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:05Z","lastTransitionTime":"2026-02-18T00:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.224957 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.224994 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.225005 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.225022 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.225040 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:05Z","lastTransitionTime":"2026-02-18T00:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.327147 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.327186 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.327195 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.327209 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.327219 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:05Z","lastTransitionTime":"2026-02-18T00:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.403579 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:05 crc kubenswrapper[4847]: E0218 00:27:05.403742 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.408582 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 00:11:47.709548073 +0000 UTC Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.428717 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.428741 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.428749 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.428761 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.428769 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:05Z","lastTransitionTime":"2026-02-18T00:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.531864 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.531917 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.531937 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.531963 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.531981 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:05Z","lastTransitionTime":"2026-02-18T00:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.635595 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.635676 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.635693 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.635718 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.635736 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:05Z","lastTransitionTime":"2026-02-18T00:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.738663 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.738747 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.738770 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.738794 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.738809 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:05Z","lastTransitionTime":"2026-02-18T00:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.842138 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.842186 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.842201 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.842219 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.842234 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:05Z","lastTransitionTime":"2026-02-18T00:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.945289 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.945341 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.945354 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.945372 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:05 crc kubenswrapper[4847]: I0218 00:27:05.945384 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:05Z","lastTransitionTime":"2026-02-18T00:27:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.048819 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.048878 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.048891 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.048908 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.048924 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:06Z","lastTransitionTime":"2026-02-18T00:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.140937 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.141016 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.141034 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.141063 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.141081 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:06Z","lastTransitionTime":"2026-02-18T00:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:06 crc kubenswrapper[4847]: E0218 00:27:06.162394 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:27:06Z is after 2025-08-24T17:21:41Z" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.168217 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.168270 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.168287 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.168310 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.168328 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:06Z","lastTransitionTime":"2026-02-18T00:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:06 crc kubenswrapper[4847]: E0218 00:27:06.191989 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:27:06Z is after 2025-08-24T17:21:41Z" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.198135 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.198209 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.198232 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.198266 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.198290 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:06Z","lastTransitionTime":"2026-02-18T00:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:06 crc kubenswrapper[4847]: E0218 00:27:06.219128 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:27:06Z is after 2025-08-24T17:21:41Z" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.224673 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.224736 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.224759 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.224785 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.224803 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:06Z","lastTransitionTime":"2026-02-18T00:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:06 crc kubenswrapper[4847]: E0218 00:27:06.250212 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:27:06Z is after 2025-08-24T17:21:41Z" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.255471 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.255579 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.255597 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.255644 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.255659 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:06Z","lastTransitionTime":"2026-02-18T00:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:06 crc kubenswrapper[4847]: E0218 00:27:06.271800 4847 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:27:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"11f7a530-3cae-485b-860e-571ec4f730a1\\\",\\\"systemUUID\\\":\\\"203b95f6-5cb7-4117-864d-f1073ddd6998\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-18T00:27:06Z is after 2025-08-24T17:21:41Z"
Feb 18 00:27:06 crc kubenswrapper[4847]: E0218 00:27:06.272021 4847 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.274673 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.274713 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.274729 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.274747 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.274761 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:06Z","lastTransitionTime":"2026-02-18T00:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.376937 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.376998 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.377019 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.377040 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.377054 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:06Z","lastTransitionTime":"2026-02-18T00:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.403948 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.403975 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.404100 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 18 00:27:06 crc kubenswrapper[4847]: E0218 00:27:06.404328 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da"
Feb 18 00:27:06 crc kubenswrapper[4847]: E0218 00:27:06.404428 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 18 00:27:06 crc kubenswrapper[4847]: E0218 00:27:06.404800 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.408954 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:56:42.122704161 +0000 UTC
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.479648 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.479685 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.479696 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.479754 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.479767 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:06Z","lastTransitionTime":"2026-02-18T00:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.583390 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.583457 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.583482 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.583516 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.583542 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:06Z","lastTransitionTime":"2026-02-18T00:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.671803 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs\") pod \"network-metrics-daemon-5rg76\" (UID: \"1a7318b6-f24d-4785-bd56-ad5ecec493da\") " pod="openshift-multus/network-metrics-daemon-5rg76"
Feb 18 00:27:06 crc kubenswrapper[4847]: E0218 00:27:06.672042 4847 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 18 00:27:06 crc kubenswrapper[4847]: E0218 00:27:06.672162 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs podName:1a7318b6-f24d-4785-bd56-ad5ecec493da nodeName:}" failed. No retries permitted until 2026-02-18 00:28:10.672127748 +0000 UTC m=+164.049478730 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs") pod "network-metrics-daemon-5rg76" (UID: "1a7318b6-f24d-4785-bd56-ad5ecec493da") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.686542 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.686627 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.686669 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.686694 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.686717 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:06Z","lastTransitionTime":"2026-02-18T00:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.790671 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.790757 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.790780 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.790815 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.790839 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:06Z","lastTransitionTime":"2026-02-18T00:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.894267 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.894343 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.894368 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.894398 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.894416 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:06Z","lastTransitionTime":"2026-02-18T00:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.997251 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.997297 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.997306 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.997321 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:27:06 crc kubenswrapper[4847]: I0218 00:27:06.997330 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:06Z","lastTransitionTime":"2026-02-18T00:27:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.101165 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.101223 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.101236 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.101259 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.101307 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:07Z","lastTransitionTime":"2026-02-18T00:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.205841 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.205888 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.205900 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.205919 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.205932 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:07Z","lastTransitionTime":"2026-02-18T00:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.309285 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.309327 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.309339 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.309358 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.309371 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:07Z","lastTransitionTime":"2026-02-18T00:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.404285 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 18 00:27:07 crc kubenswrapper[4847]: E0218 00:27:07.404539 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.409192 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 17:44:39.939022469 +0000 UTC
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.411857 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.411911 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.411923 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.411941 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.411956 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:07Z","lastTransitionTime":"2026-02-18T00:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.481809 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podStartSLOduration=80.481785708 podStartE2EDuration="1m20.481785708s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:07.469870129 +0000 UTC m=+100.847221061" watchObservedRunningTime="2026-02-18 00:27:07.481785708 +0000 UTC m=+100.859136650"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.482368 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-4w5fp" podStartSLOduration=80.482363772 podStartE2EDuration="1m20.482363772s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:07.481659265 +0000 UTC m=+100.859010217" watchObservedRunningTime="2026-02-18 00:27:07.482363772 +0000 UTC m=+100.859714704"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.515130 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.515170 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.515183 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.515202 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.515215 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:07Z","lastTransitionTime":"2026-02-18T00:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.550909 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=78.550884883 podStartE2EDuration="1m18.550884883s" podCreationTimestamp="2026-02-18 00:25:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:07.501504446 +0000 UTC m=+100.878855418" watchObservedRunningTime="2026-02-18 00:27:07.550884883 +0000 UTC m=+100.928235845"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.566230 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=79.566200834 podStartE2EDuration="1m19.566200834s" podCreationTimestamp="2026-02-18 00:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:07.550482183 +0000 UTC m=+100.927833165" watchObservedRunningTime="2026-02-18 00:27:07.566200834 +0000 UTC m=+100.943551796"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.585582 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-wprf4" podStartSLOduration=80.585563283 podStartE2EDuration="1m20.585563283s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:07.584484627 +0000 UTC m=+100.961835569" watchObservedRunningTime="2026-02-18 00:27:07.585563283 +0000 UTC m=+100.962914245"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.599017 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vk8bl" podStartSLOduration=80.598998469 podStartE2EDuration="1m20.598998469s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:07.598996599 +0000 UTC m=+100.976347561" watchObservedRunningTime="2026-02-18 00:27:07.598998469 +0000 UTC m=+100.976349401"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.617546 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.617584 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.617594 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.617634 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.617646 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:07Z","lastTransitionTime":"2026-02-18T00:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.640343 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-wfg4t" podStartSLOduration=80.64032572 podStartE2EDuration="1m20.64032572s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:07.639848659 +0000 UTC m=+101.017199601" watchObservedRunningTime="2026-02-18 00:27:07.64032572 +0000 UTC m=+101.017676672"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.707561 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=81.707543319 podStartE2EDuration="1m21.707543319s" podCreationTimestamp="2026-02-18 00:25:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:07.697099326 +0000 UTC m=+101.074450298" watchObservedRunningTime="2026-02-18 00:27:07.707543319 +0000 UTC m=+101.084894261"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.719372 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.719399 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.719409 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.719422 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.719432 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:07Z","lastTransitionTime":"2026-02-18T00:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.725310 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=29.725293379 podStartE2EDuration="29.725293379s" podCreationTimestamp="2026-02-18 00:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:07.708113143 +0000 UTC m=+101.085464085" watchObservedRunningTime="2026-02-18 00:27:07.725293379 +0000 UTC m=+101.102644321"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.744892 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=48.744851773 podStartE2EDuration="48.744851773s" podCreationTimestamp="2026-02-18 00:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:07.740118498 +0000 UTC m=+101.117469440" watchObservedRunningTime="2026-02-18 00:27:07.744851773 +0000 UTC m=+101.122202715"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.776297 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-d9clg" podStartSLOduration=80.776267624 podStartE2EDuration="1m20.776267624s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:07.775124787 +0000 UTC m=+101.152475759" watchObservedRunningTime="2026-02-18 00:27:07.776267624 +0000 UTC m=+101.153618596"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.822160 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.822203 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.822212 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.822229 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.822239 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:07Z","lastTransitionTime":"2026-02-18T00:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.925706 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.925765 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.925783 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.925807 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:27:07 crc kubenswrapper[4847]: I0218 00:27:07.925825 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:07Z","lastTransitionTime":"2026-02-18T00:27:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.027766 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.027802 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.027812 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.027826 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.027836 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:08Z","lastTransitionTime":"2026-02-18T00:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.130093 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.130133 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.130145 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.130163 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.130175 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:08Z","lastTransitionTime":"2026-02-18T00:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.234010 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.234090 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.234113 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.234144 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.234168 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:08Z","lastTransitionTime":"2026-02-18T00:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.337748 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.337799 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.337814 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.337833 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.337847 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:08Z","lastTransitionTime":"2026-02-18T00:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.404038 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.404145 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:27:08 crc kubenswrapper[4847]: E0218 00:27:08.404190 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:27:08 crc kubenswrapper[4847]: E0218 00:27:08.404275 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.404142 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:08 crc kubenswrapper[4847]: E0218 00:27:08.404365 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.410329 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 15:33:25.493741099 +0000 UTC Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.440680 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.440714 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.440722 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.440736 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.440750 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:08Z","lastTransitionTime":"2026-02-18T00:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.543034 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.543076 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.543087 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.543103 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.543114 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:08Z","lastTransitionTime":"2026-02-18T00:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.645781 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.645819 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.645829 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.645846 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.645874 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:08Z","lastTransitionTime":"2026-02-18T00:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.747944 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.747989 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.748001 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.748019 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.748031 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:08Z","lastTransitionTime":"2026-02-18T00:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.850125 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.850164 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.850172 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.850186 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.850196 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:08Z","lastTransitionTime":"2026-02-18T00:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.952566 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.952636 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.952653 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.952672 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:08 crc kubenswrapper[4847]: I0218 00:27:08.952683 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:08Z","lastTransitionTime":"2026-02-18T00:27:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.054728 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.054773 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.054786 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.054804 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.054817 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:09Z","lastTransitionTime":"2026-02-18T00:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.157334 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.157384 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.157395 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.157412 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.157424 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:09Z","lastTransitionTime":"2026-02-18T00:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.259735 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.259773 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.259782 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.259797 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.259806 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:09Z","lastTransitionTime":"2026-02-18T00:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.362573 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.362661 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.362680 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.362704 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.362721 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:09Z","lastTransitionTime":"2026-02-18T00:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.403622 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:09 crc kubenswrapper[4847]: E0218 00:27:09.403811 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.410759 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 17:08:55.955091637 +0000 UTC Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.465210 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.465266 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.465284 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.465308 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.465326 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:09Z","lastTransitionTime":"2026-02-18T00:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.567649 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.567691 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.567700 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.567714 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.567725 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:09Z","lastTransitionTime":"2026-02-18T00:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.670095 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.670136 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.670148 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.670164 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.670176 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:09Z","lastTransitionTime":"2026-02-18T00:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.772497 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.772577 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.773096 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.773174 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.773634 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:09Z","lastTransitionTime":"2026-02-18T00:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.876502 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.876535 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.876543 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.876556 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.876566 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:09Z","lastTransitionTime":"2026-02-18T00:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.979586 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.979801 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.979827 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.979851 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:09 crc kubenswrapper[4847]: I0218 00:27:09.979870 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:09Z","lastTransitionTime":"2026-02-18T00:27:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.083287 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.083353 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.083367 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.083385 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.083398 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:10Z","lastTransitionTime":"2026-02-18T00:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.186758 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.186819 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.186837 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.186862 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.186878 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:10Z","lastTransitionTime":"2026-02-18T00:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.291045 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.291102 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.291117 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.291139 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.291154 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:10Z","lastTransitionTime":"2026-02-18T00:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.394497 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.394562 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.394579 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.394636 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.394656 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:10Z","lastTransitionTime":"2026-02-18T00:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.404045 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.404080 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.404114 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:27:10 crc kubenswrapper[4847]: E0218 00:27:10.404229 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:27:10 crc kubenswrapper[4847]: E0218 00:27:10.404400 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:27:10 crc kubenswrapper[4847]: E0218 00:27:10.404932 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.411100 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 15:20:10.783071747 +0000 UTC Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.497020 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.497058 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.497069 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.497085 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.497097 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:10Z","lastTransitionTime":"2026-02-18T00:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.600670 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.600721 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.600743 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.600771 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.600796 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:10Z","lastTransitionTime":"2026-02-18T00:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.704488 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.704571 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.704595 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.704659 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.704682 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:10Z","lastTransitionTime":"2026-02-18T00:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.807797 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.807863 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.807881 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.807906 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.807925 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:10Z","lastTransitionTime":"2026-02-18T00:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.911405 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.911445 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.911455 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.911472 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:10 crc kubenswrapper[4847]: I0218 00:27:10.911483 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:10Z","lastTransitionTime":"2026-02-18T00:27:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.014242 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.014287 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.014299 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.014318 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.014334 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:11Z","lastTransitionTime":"2026-02-18T00:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.117428 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.117499 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.117514 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.117540 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.117556 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:11Z","lastTransitionTime":"2026-02-18T00:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.221023 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.221077 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.221089 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.221110 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.221122 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:11Z","lastTransitionTime":"2026-02-18T00:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.324157 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.324199 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.324214 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.324232 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.324244 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:11Z","lastTransitionTime":"2026-02-18T00:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.404081 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:11 crc kubenswrapper[4847]: E0218 00:27:11.404674 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.411456 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 23:22:40.40418805 +0000 UTC Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.429471 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.429589 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.429676 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.429763 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.429798 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:11Z","lastTransitionTime":"2026-02-18T00:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.534398 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.534474 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.534491 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.534517 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.534543 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:11Z","lastTransitionTime":"2026-02-18T00:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.638284 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.638367 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.638389 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.638415 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.638434 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:11Z","lastTransitionTime":"2026-02-18T00:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.741533 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.741628 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.741649 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.741677 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.741694 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:11Z","lastTransitionTime":"2026-02-18T00:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.844825 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.844917 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.844938 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.844968 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.844990 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:11Z","lastTransitionTime":"2026-02-18T00:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.948772 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.948840 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.948857 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.948883 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:11 crc kubenswrapper[4847]: I0218 00:27:11.948902 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:11Z","lastTransitionTime":"2026-02-18T00:27:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.052364 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.052436 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.052455 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.052486 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.052508 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:12Z","lastTransitionTime":"2026-02-18T00:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.156915 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.156983 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.157003 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.157033 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.157052 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:12Z","lastTransitionTime":"2026-02-18T00:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.261105 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.261182 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.261203 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.261231 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.261252 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:12Z","lastTransitionTime":"2026-02-18T00:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.365990 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.366091 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.366115 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.366152 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.366177 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:12Z","lastTransitionTime":"2026-02-18T00:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.404332 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.404374 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.404480 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:27:12 crc kubenswrapper[4847]: E0218 00:27:12.404584 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:27:12 crc kubenswrapper[4847]: E0218 00:27:12.404701 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:27:12 crc kubenswrapper[4847]: E0218 00:27:12.404763 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.412497 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 05:06:19.759207149 +0000 UTC Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.469157 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.469188 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.469197 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.469213 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.469222 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:12Z","lastTransitionTime":"2026-02-18T00:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.572189 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.572260 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.572325 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.572352 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.572373 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:12Z","lastTransitionTime":"2026-02-18T00:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.675268 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.675314 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.675328 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.675345 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.675358 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:12Z","lastTransitionTime":"2026-02-18T00:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.778971 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.779047 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.779066 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.779097 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.779125 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:12Z","lastTransitionTime":"2026-02-18T00:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.882883 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.882944 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.882960 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.882985 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.883006 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:12Z","lastTransitionTime":"2026-02-18T00:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.986485 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.986555 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.986580 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.986649 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:12 crc kubenswrapper[4847]: I0218 00:27:12.986675 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:12Z","lastTransitionTime":"2026-02-18T00:27:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.090093 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.090187 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.090206 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.090233 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.090255 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:13Z","lastTransitionTime":"2026-02-18T00:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.193556 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.193642 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.193660 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.193702 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.193720 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:13Z","lastTransitionTime":"2026-02-18T00:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.297258 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.297321 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.297338 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.297380 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.297419 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:13Z","lastTransitionTime":"2026-02-18T00:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.399964 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.400022 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.400038 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.400069 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.400084 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:13Z","lastTransitionTime":"2026-02-18T00:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.403155 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:13 crc kubenswrapper[4847]: E0218 00:27:13.403252 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.413457 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 11:59:40.684410716 +0000 UTC Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.503744 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.503786 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.503796 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.503812 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.503823 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:13Z","lastTransitionTime":"2026-02-18T00:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.606503 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.606551 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.606561 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.606577 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.606588 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:13Z","lastTransitionTime":"2026-02-18T00:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.710046 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.710107 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.710126 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.710149 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.710163 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:13Z","lastTransitionTime":"2026-02-18T00:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.814398 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.814483 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.814511 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.814548 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.814581 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:13Z","lastTransitionTime":"2026-02-18T00:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.917973 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.918055 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.918074 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.918105 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:13 crc kubenswrapper[4847]: I0218 00:27:13.918123 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:13Z","lastTransitionTime":"2026-02-18T00:27:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.020625 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.020686 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.020704 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.020726 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.020741 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:14Z","lastTransitionTime":"2026-02-18T00:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.123022 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.123055 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.123066 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.123081 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.123093 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:14Z","lastTransitionTime":"2026-02-18T00:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.225835 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.225898 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.225915 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.225944 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.225981 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:14Z","lastTransitionTime":"2026-02-18T00:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.329814 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.329873 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.329890 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.329914 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.329935 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:14Z","lastTransitionTime":"2026-02-18T00:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.403955 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.403973 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:14 crc kubenswrapper[4847]: E0218 00:27:14.404250 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:27:14 crc kubenswrapper[4847]: E0218 00:27:14.404323 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.404092 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:27:14 crc kubenswrapper[4847]: E0218 00:27:14.404525 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.413969 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 14:30:31.771503746 +0000 UTC Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.434012 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.434167 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.434189 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.434214 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.434234 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:14Z","lastTransitionTime":"2026-02-18T00:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.538034 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.538137 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.538198 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.538230 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.538252 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:14Z","lastTransitionTime":"2026-02-18T00:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.641740 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.641832 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.641857 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.641892 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.641915 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:14Z","lastTransitionTime":"2026-02-18T00:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.745105 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.745169 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.745188 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.745215 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.745235 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:14Z","lastTransitionTime":"2026-02-18T00:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.849015 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.849119 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.849145 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.849176 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.849200 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:14Z","lastTransitionTime":"2026-02-18T00:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.953190 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.953322 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.953345 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.953375 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:14 crc kubenswrapper[4847]: I0218 00:27:14.953395 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:14Z","lastTransitionTime":"2026-02-18T00:27:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.058031 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.058099 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.058121 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.058162 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.058184 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:15Z","lastTransitionTime":"2026-02-18T00:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.161990 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.162062 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.162087 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.162117 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.162138 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:15Z","lastTransitionTime":"2026-02-18T00:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.265693 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.266096 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.266193 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.266300 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.266401 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:15Z","lastTransitionTime":"2026-02-18T00:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.369497 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.369586 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.369660 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.369703 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.369730 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:15Z","lastTransitionTime":"2026-02-18T00:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.403877 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:15 crc kubenswrapper[4847]: E0218 00:27:15.404078 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.414875 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 19:09:12.78439041 +0000 UTC Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.473554 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.473653 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.473667 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.473692 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.473736 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:15Z","lastTransitionTime":"2026-02-18T00:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.578058 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.578136 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.578156 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.578186 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.578204 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:15Z","lastTransitionTime":"2026-02-18T00:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.681258 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.681360 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.681386 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.681420 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.681446 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:15Z","lastTransitionTime":"2026-02-18T00:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.785075 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.785131 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.785146 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.785169 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.785192 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:15Z","lastTransitionTime":"2026-02-18T00:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.889069 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.889153 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.889173 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.889206 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.889228 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:15Z","lastTransitionTime":"2026-02-18T00:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.993319 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.993399 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.993419 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.993447 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:15 crc kubenswrapper[4847]: I0218 00:27:15.993760 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:15Z","lastTransitionTime":"2026-02-18T00:27:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.096201 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.096249 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.096262 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.096285 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.096300 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:16Z","lastTransitionTime":"2026-02-18T00:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.199079 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.199153 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.199175 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.199204 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.199227 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:16Z","lastTransitionTime":"2026-02-18T00:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.302917 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.302980 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.302998 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.303023 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.303041 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:16Z","lastTransitionTime":"2026-02-18T00:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.404119 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.404239 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.404257 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:16 crc kubenswrapper[4847]: E0218 00:27:16.404560 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:27:16 crc kubenswrapper[4847]: E0218 00:27:16.404779 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:27:16 crc kubenswrapper[4847]: E0218 00:27:16.405456 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.406022 4847 scope.go:117] "RemoveContainer" containerID="9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67" Feb 18 00:27:16 crc kubenswrapper[4847]: E0218 00:27:16.406382 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-bxm6w_openshift-ovn-kubernetes(86e5946b-870b-46f1-8923-4a8abd64da45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.406538 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.406571 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.406583 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.406641 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.406658 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:16Z","lastTransitionTime":"2026-02-18T00:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.416061 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 06:19:16.337937312 +0000 UTC Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.510038 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.510125 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.510155 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.510187 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.510210 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:16Z","lastTransitionTime":"2026-02-18T00:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.557135 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.557208 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.557225 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.557253 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.557275 4847 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:27:16Z","lastTransitionTime":"2026-02-18T00:27:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.662182 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-nzcz7"] Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.662625 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nzcz7" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.664567 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.665099 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.665973 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.676082 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.804812 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc78bf19-732c-4e5f-aee0-66ec38de4683-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nzcz7\" (UID: \"dc78bf19-732c-4e5f-aee0-66ec38de4683\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nzcz7" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.804959 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc78bf19-732c-4e5f-aee0-66ec38de4683-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nzcz7\" (UID: \"dc78bf19-732c-4e5f-aee0-66ec38de4683\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nzcz7" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.805017 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/dc78bf19-732c-4e5f-aee0-66ec38de4683-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nzcz7\" (UID: \"dc78bf19-732c-4e5f-aee0-66ec38de4683\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nzcz7" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.805071 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dc78bf19-732c-4e5f-aee0-66ec38de4683-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nzcz7\" (UID: \"dc78bf19-732c-4e5f-aee0-66ec38de4683\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nzcz7" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.805104 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dc78bf19-732c-4e5f-aee0-66ec38de4683-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nzcz7\" (UID: \"dc78bf19-732c-4e5f-aee0-66ec38de4683\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nzcz7" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.906480 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc78bf19-732c-4e5f-aee0-66ec38de4683-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nzcz7\" (UID: \"dc78bf19-732c-4e5f-aee0-66ec38de4683\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nzcz7" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.907089 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dc78bf19-732c-4e5f-aee0-66ec38de4683-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nzcz7\" (UID: \"dc78bf19-732c-4e5f-aee0-66ec38de4683\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nzcz7" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.907209 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dc78bf19-732c-4e5f-aee0-66ec38de4683-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nzcz7\" (UID: \"dc78bf19-732c-4e5f-aee0-66ec38de4683\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nzcz7" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.907271 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dc78bf19-732c-4e5f-aee0-66ec38de4683-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nzcz7\" (UID: \"dc78bf19-732c-4e5f-aee0-66ec38de4683\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nzcz7" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.907386 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dc78bf19-732c-4e5f-aee0-66ec38de4683-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nzcz7\" (UID: \"dc78bf19-732c-4e5f-aee0-66ec38de4683\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nzcz7" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.907430 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc78bf19-732c-4e5f-aee0-66ec38de4683-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nzcz7\" (UID: \"dc78bf19-732c-4e5f-aee0-66ec38de4683\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nzcz7" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.907716 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/dc78bf19-732c-4e5f-aee0-66ec38de4683-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nzcz7\" (UID: \"dc78bf19-732c-4e5f-aee0-66ec38de4683\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nzcz7" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.909000 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dc78bf19-732c-4e5f-aee0-66ec38de4683-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nzcz7\" (UID: \"dc78bf19-732c-4e5f-aee0-66ec38de4683\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nzcz7" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.918205 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc78bf19-732c-4e5f-aee0-66ec38de4683-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nzcz7\" (UID: \"dc78bf19-732c-4e5f-aee0-66ec38de4683\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nzcz7" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.932810 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc78bf19-732c-4e5f-aee0-66ec38de4683-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nzcz7\" (UID: \"dc78bf19-732c-4e5f-aee0-66ec38de4683\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nzcz7" Feb 18 00:27:16 crc kubenswrapper[4847]: I0218 00:27:16.980157 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nzcz7" Feb 18 00:27:17 crc kubenswrapper[4847]: I0218 00:27:17.042073 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nzcz7" event={"ID":"dc78bf19-732c-4e5f-aee0-66ec38de4683","Type":"ContainerStarted","Data":"dd7902cb8599daa7dae6b93b3bc71056c80e73105b232f00d70b20bebf393c68"} Feb 18 00:27:17 crc kubenswrapper[4847]: I0218 00:27:17.403944 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:17 crc kubenswrapper[4847]: E0218 00:27:17.405982 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:27:17 crc kubenswrapper[4847]: I0218 00:27:17.417173 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 04:46:55.700882881 +0000 UTC Feb 18 00:27:17 crc kubenswrapper[4847]: I0218 00:27:17.417271 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 18 00:27:17 crc kubenswrapper[4847]: I0218 00:27:17.430786 4847 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 18 00:27:18 crc kubenswrapper[4847]: I0218 00:27:18.048146 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nzcz7" event={"ID":"dc78bf19-732c-4e5f-aee0-66ec38de4683","Type":"ContainerStarted","Data":"c5b6d7a9bb29f8553b6e6a93f9f47eb3e5b1277d3b2ba58331b1c5232787909f"} Feb 18 00:27:18 crc kubenswrapper[4847]: I0218 00:27:18.403469 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:18 crc kubenswrapper[4847]: I0218 00:27:18.403478 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:18 crc kubenswrapper[4847]: I0218 00:27:18.403714 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:27:18 crc kubenswrapper[4847]: E0218 00:27:18.403763 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:27:18 crc kubenswrapper[4847]: E0218 00:27:18.404012 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:27:18 crc kubenswrapper[4847]: E0218 00:27:18.404149 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:27:19 crc kubenswrapper[4847]: I0218 00:27:19.404397 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:19 crc kubenswrapper[4847]: E0218 00:27:19.404648 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:27:20 crc kubenswrapper[4847]: I0218 00:27:20.403876 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:20 crc kubenswrapper[4847]: I0218 00:27:20.403978 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:27:20 crc kubenswrapper[4847]: E0218 00:27:20.404088 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:27:20 crc kubenswrapper[4847]: E0218 00:27:20.404216 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:27:20 crc kubenswrapper[4847]: I0218 00:27:20.404347 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:20 crc kubenswrapper[4847]: E0218 00:27:20.404488 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:27:21 crc kubenswrapper[4847]: I0218 00:27:21.404208 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:21 crc kubenswrapper[4847]: E0218 00:27:21.404381 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:27:22 crc kubenswrapper[4847]: I0218 00:27:22.069999 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wprf4_f2eb9a65-88b5-49d1-885a-98c60c1283b4/kube-multus/1.log" Feb 18 00:27:22 crc kubenswrapper[4847]: I0218 00:27:22.071491 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wprf4_f2eb9a65-88b5-49d1-885a-98c60c1283b4/kube-multus/0.log" Feb 18 00:27:22 crc kubenswrapper[4847]: I0218 00:27:22.071742 4847 generic.go:334] "Generic (PLEG): container finished" podID="f2eb9a65-88b5-49d1-885a-98c60c1283b4" containerID="f14a2601bed78c7ba00c461098095c844732f2680236e3fe53ad2a8683126482" exitCode=1 Feb 18 00:27:22 crc kubenswrapper[4847]: I0218 00:27:22.071860 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wprf4" event={"ID":"f2eb9a65-88b5-49d1-885a-98c60c1283b4","Type":"ContainerDied","Data":"f14a2601bed78c7ba00c461098095c844732f2680236e3fe53ad2a8683126482"} Feb 18 00:27:22 crc kubenswrapper[4847]: I0218 00:27:22.072111 4847 scope.go:117] "RemoveContainer" containerID="61b79ee95e48756d8d3aaa198d0aae4ff540e5a7ab33c3236d21f63c072bd8f6" Feb 18 00:27:22 crc kubenswrapper[4847]: 
I0218 00:27:22.072855 4847 scope.go:117] "RemoveContainer" containerID="f14a2601bed78c7ba00c461098095c844732f2680236e3fe53ad2a8683126482" Feb 18 00:27:22 crc kubenswrapper[4847]: E0218 00:27:22.073127 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-wprf4_openshift-multus(f2eb9a65-88b5-49d1-885a-98c60c1283b4)\"" pod="openshift-multus/multus-wprf4" podUID="f2eb9a65-88b5-49d1-885a-98c60c1283b4" Feb 18 00:27:22 crc kubenswrapper[4847]: I0218 00:27:22.099336 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nzcz7" podStartSLOduration=95.099314178 podStartE2EDuration="1m35.099314178s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:18.075169443 +0000 UTC m=+111.452520385" watchObservedRunningTime="2026-02-18 00:27:22.099314178 +0000 UTC m=+115.476665120" Feb 18 00:27:22 crc kubenswrapper[4847]: I0218 00:27:22.404155 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:22 crc kubenswrapper[4847]: I0218 00:27:22.404319 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:22 crc kubenswrapper[4847]: I0218 00:27:22.404384 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:27:22 crc kubenswrapper[4847]: E0218 00:27:22.404415 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:27:22 crc kubenswrapper[4847]: E0218 00:27:22.404529 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:27:22 crc kubenswrapper[4847]: E0218 00:27:22.404698 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:27:23 crc kubenswrapper[4847]: I0218 00:27:23.077961 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wprf4_f2eb9a65-88b5-49d1-885a-98c60c1283b4/kube-multus/1.log" Feb 18 00:27:23 crc kubenswrapper[4847]: I0218 00:27:23.403351 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:23 crc kubenswrapper[4847]: E0218 00:27:23.403587 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:27:24 crc kubenswrapper[4847]: I0218 00:27:24.403518 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:24 crc kubenswrapper[4847]: I0218 00:27:24.403709 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:24 crc kubenswrapper[4847]: E0218 00:27:24.403797 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:27:24 crc kubenswrapper[4847]: I0218 00:27:24.403915 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:27:24 crc kubenswrapper[4847]: E0218 00:27:24.404220 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:27:24 crc kubenswrapper[4847]: E0218 00:27:24.404408 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:27:25 crc kubenswrapper[4847]: I0218 00:27:25.403987 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:25 crc kubenswrapper[4847]: E0218 00:27:25.404174 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:27:26 crc kubenswrapper[4847]: I0218 00:27:26.404336 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:26 crc kubenswrapper[4847]: I0218 00:27:26.404451 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:27:26 crc kubenswrapper[4847]: I0218 00:27:26.404344 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:26 crc kubenswrapper[4847]: E0218 00:27:26.404666 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:27:26 crc kubenswrapper[4847]: E0218 00:27:26.404879 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:27:26 crc kubenswrapper[4847]: E0218 00:27:26.405091 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:27:27 crc kubenswrapper[4847]: E0218 00:27:27.397231 4847 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 18 00:27:27 crc kubenswrapper[4847]: I0218 00:27:27.403672 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:27 crc kubenswrapper[4847]: E0218 00:27:27.404925 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:27:27 crc kubenswrapper[4847]: E0218 00:27:27.536793 4847 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:27:28 crc kubenswrapper[4847]: I0218 00:27:28.404375 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:28 crc kubenswrapper[4847]: I0218 00:27:28.404469 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:27:28 crc kubenswrapper[4847]: I0218 00:27:28.404591 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:28 crc kubenswrapper[4847]: E0218 00:27:28.404702 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:27:28 crc kubenswrapper[4847]: E0218 00:27:28.404883 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:27:28 crc kubenswrapper[4847]: E0218 00:27:28.405063 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:27:29 crc kubenswrapper[4847]: I0218 00:27:29.404260 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:29 crc kubenswrapper[4847]: E0218 00:27:29.404489 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:27:30 crc kubenswrapper[4847]: I0218 00:27:30.403536 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:27:30 crc kubenswrapper[4847]: I0218 00:27:30.403556 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:30 crc kubenswrapper[4847]: I0218 00:27:30.403814 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:30 crc kubenswrapper[4847]: E0218 00:27:30.404956 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:27:30 crc kubenswrapper[4847]: E0218 00:27:30.406018 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:27:30 crc kubenswrapper[4847]: E0218 00:27:30.406173 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:27:31 crc kubenswrapper[4847]: I0218 00:27:31.403982 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:31 crc kubenswrapper[4847]: E0218 00:27:31.404687 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:27:31 crc kubenswrapper[4847]: I0218 00:27:31.405275 4847 scope.go:117] "RemoveContainer" containerID="9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67" Feb 18 00:27:32 crc kubenswrapper[4847]: I0218 00:27:32.113768 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxm6w_86e5946b-870b-46f1-8923-4a8abd64da45/ovnkube-controller/3.log" Feb 18 00:27:32 crc kubenswrapper[4847]: I0218 00:27:32.116829 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerStarted","Data":"fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd"} Feb 18 00:27:32 crc kubenswrapper[4847]: I0218 00:27:32.117239 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:27:32 crc kubenswrapper[4847]: I0218 00:27:32.145563 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" podStartSLOduration=105.145538487 podStartE2EDuration="1m45.145538487s" podCreationTimestamp="2026-02-18 
00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:32.144954513 +0000 UTC m=+125.522305455" watchObservedRunningTime="2026-02-18 00:27:32.145538487 +0000 UTC m=+125.522889429" Feb 18 00:27:32 crc kubenswrapper[4847]: I0218 00:27:32.311905 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5rg76"] Feb 18 00:27:32 crc kubenswrapper[4847]: I0218 00:27:32.312089 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:27:32 crc kubenswrapper[4847]: E0218 00:27:32.312278 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:27:32 crc kubenswrapper[4847]: I0218 00:27:32.403855 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:32 crc kubenswrapper[4847]: I0218 00:27:32.403963 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:32 crc kubenswrapper[4847]: E0218 00:27:32.404029 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:27:32 crc kubenswrapper[4847]: E0218 00:27:32.404177 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:27:32 crc kubenswrapper[4847]: E0218 00:27:32.538430 4847 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:27:33 crc kubenswrapper[4847]: I0218 00:27:33.403485 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:33 crc kubenswrapper[4847]: E0218 00:27:33.403693 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:27:34 crc kubenswrapper[4847]: I0218 00:27:34.403958 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:34 crc kubenswrapper[4847]: I0218 00:27:34.404022 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:27:34 crc kubenswrapper[4847]: I0218 00:27:34.404051 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:34 crc kubenswrapper[4847]: E0218 00:27:34.404110 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:27:34 crc kubenswrapper[4847]: E0218 00:27:34.404216 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:27:34 crc kubenswrapper[4847]: E0218 00:27:34.404309 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:27:35 crc kubenswrapper[4847]: I0218 00:27:35.403397 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:35 crc kubenswrapper[4847]: E0218 00:27:35.403672 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:27:36 crc kubenswrapper[4847]: I0218 00:27:36.403667 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:36 crc kubenswrapper[4847]: I0218 00:27:36.403667 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:27:36 crc kubenswrapper[4847]: E0218 00:27:36.404210 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:27:36 crc kubenswrapper[4847]: I0218 00:27:36.403730 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:36 crc kubenswrapper[4847]: E0218 00:27:36.404358 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:27:36 crc kubenswrapper[4847]: E0218 00:27:36.404553 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:27:37 crc kubenswrapper[4847]: I0218 00:27:37.403727 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:37 crc kubenswrapper[4847]: E0218 00:27:37.405170 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:27:37 crc kubenswrapper[4847]: I0218 00:27:37.405845 4847 scope.go:117] "RemoveContainer" containerID="f14a2601bed78c7ba00c461098095c844732f2680236e3fe53ad2a8683126482" Feb 18 00:27:37 crc kubenswrapper[4847]: E0218 00:27:37.540349 4847 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 18 00:27:38 crc kubenswrapper[4847]: I0218 00:27:38.146651 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wprf4_f2eb9a65-88b5-49d1-885a-98c60c1283b4/kube-multus/1.log" Feb 18 00:27:38 crc kubenswrapper[4847]: I0218 00:27:38.146748 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wprf4" event={"ID":"f2eb9a65-88b5-49d1-885a-98c60c1283b4","Type":"ContainerStarted","Data":"61abcb29f8d8794e0642cb97e22d8e306abd9620e04c0396bce879675cbff4fb"} Feb 18 00:27:38 crc kubenswrapper[4847]: I0218 00:27:38.404247 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:38 crc kubenswrapper[4847]: I0218 00:27:38.404247 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:38 crc kubenswrapper[4847]: E0218 00:27:38.404494 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:27:38 crc kubenswrapper[4847]: I0218 00:27:38.404292 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:27:38 crc kubenswrapper[4847]: E0218 00:27:38.404838 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:27:38 crc kubenswrapper[4847]: E0218 00:27:38.405059 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:27:39 crc kubenswrapper[4847]: I0218 00:27:39.404421 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:39 crc kubenswrapper[4847]: E0218 00:27:39.404663 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:27:40 crc kubenswrapper[4847]: I0218 00:27:40.403899 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:40 crc kubenswrapper[4847]: I0218 00:27:40.403981 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:27:40 crc kubenswrapper[4847]: E0218 00:27:40.404091 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:27:40 crc kubenswrapper[4847]: I0218 00:27:40.403996 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:40 crc kubenswrapper[4847]: E0218 00:27:40.404201 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:27:40 crc kubenswrapper[4847]: E0218 00:27:40.404355 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:27:41 crc kubenswrapper[4847]: I0218 00:27:41.403917 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:41 crc kubenswrapper[4847]: E0218 00:27:41.405024 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 18 00:27:42 crc kubenswrapper[4847]: I0218 00:27:42.403813 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:42 crc kubenswrapper[4847]: I0218 00:27:42.403823 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:27:42 crc kubenswrapper[4847]: E0218 00:27:42.404043 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 18 00:27:42 crc kubenswrapper[4847]: I0218 00:27:42.403844 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:42 crc kubenswrapper[4847]: E0218 00:27:42.404188 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5rg76" podUID="1a7318b6-f24d-4785-bd56-ad5ecec493da" Feb 18 00:27:42 crc kubenswrapper[4847]: E0218 00:27:42.404304 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 18 00:27:43 crc kubenswrapper[4847]: I0218 00:27:43.403509 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:43 crc kubenswrapper[4847]: I0218 00:27:43.405580 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 00:27:43 crc kubenswrapper[4847]: I0218 00:27:43.406465 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 18 00:27:44 crc kubenswrapper[4847]: I0218 00:27:44.403718 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:44 crc kubenswrapper[4847]: I0218 00:27:44.403723 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:27:44 crc kubenswrapper[4847]: I0218 00:27:44.403758 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:44 crc kubenswrapper[4847]: I0218 00:27:44.407953 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 00:27:44 crc kubenswrapper[4847]: I0218 00:27:44.408224 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 18 00:27:44 crc kubenswrapper[4847]: I0218 00:27:44.409425 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 18 00:27:44 crc kubenswrapper[4847]: I0218 00:27:44.409549 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.499007 4847 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.552791 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29522880-98mw7"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.554279 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29522880-98mw7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.557466 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.559741 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-26w6l"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.560970 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26w6l" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.559743 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.570073 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.570100 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.576902 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.599798 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d4c9w"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.600574 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xk7s7"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.601116 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.601582 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nftpn"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.602226 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nftpn" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.604849 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.605480 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.605746 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.605987 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.606151 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.606654 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.606785 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.606156 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.607130 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.606661 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.606685 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 
00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.609238 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.609914 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.610011 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.610206 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.613765 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.614428 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.614564 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.614663 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.614668 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.614742 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.615030 4847 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.615141 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.615336 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.615822 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8d9lm"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.616004 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.616532 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.616557 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.616710 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tmbbz"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.616927 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.616943 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.617312 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.617503 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pz8zw"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.617682 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8d9lm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.617932 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.618812 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.618947 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.619049 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.619154 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.619251 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.619386 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.619481 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.619678 4847 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.619718 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.619794 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.620032 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.620255 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.620393 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.620573 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.620596 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.622258 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.627166 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ds8pk"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.627580 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hc5ks"] Feb 18 
00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.628021 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-sc5ff"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.628395 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-tfxw5"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.629277 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-pz8zw" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.629645 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ds8pk" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.629861 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hc5ks" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.630043 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-sc5ff" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.630214 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.635791 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xb6hm"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.636567 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6kdbm"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.636892 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xb6hm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.637238 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6kdbm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.637669 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.645323 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.646908 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.647036 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.647552 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.648874 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.649082 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.649177 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.653670 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" 
Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.653769 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.654151 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cz42z"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.654271 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.655320 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.660407 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.663406 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.664847 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.664588 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p5gdf"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.672247 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.691722 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.694090 4847 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cz42z" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.694843 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4v5gj"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.695858 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4v5gj" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.695864 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p5gdf" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.698099 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pjq6b"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.700109 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.700147 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.701838 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-hc9j8"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.702445 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hc9j8" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.702538 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pjq6b" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.705697 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9jnmn"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.707107 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.708116 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lg2zf"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.708422 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5xpzg"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.708914 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sgg8q"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.709187 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jzfq"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.709595 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jzfq" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.709746 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.709872 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7cgq"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.711280 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5xpzg" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.711428 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sgg8q" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.712138 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.712751 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.712790 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lg2zf" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.715055 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.715162 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.715882 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.716061 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.716236 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.716471 4847 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-rbac-proxy" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.716752 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.716855 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.717017 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.717043 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.717156 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.718090 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29522880-98mw7"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.718226 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7cgq" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.719013 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.719161 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.719381 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.719939 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.720456 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.720842 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.720997 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.721057 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.721003 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.722955 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr"] Feb 18 00:27:47 crc 
kubenswrapper[4847]: I0218 00:27:47.724034 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n6zk7"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.724961 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n6zk7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.725021 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hwsk5"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.725982 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.726768 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.726898 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.726934 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.727038 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.727243 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.727366 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.727380 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.727733 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.727745 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.727763 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.727819 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.727905 4847 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.728177 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.728292 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.728916 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.730086 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n6t5r"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.747389 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.747452 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.747484 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5qb7\" (UniqueName: \"kubernetes.io/projected/d29f70a5-3d87-465a-a052-922f9616ac9d-kube-api-access-h5qb7\") pod \"openshift-apiserver-operator-796bbdcf4f-nftpn\" (UID: \"d29f70a5-3d87-465a-a052-922f9616ac9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nftpn" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.747518 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6hwx\" (UniqueName: \"kubernetes.io/projected/b4d13f62-c469-4050-8974-8ccf32bf0bce-kube-api-access-s6hwx\") pod \"console-f9d7485db-tfxw5\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.747872 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.747902 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6dmsr\" (UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.747923 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-encryption-config\") pod \"apiserver-7bbb656c7d-6dmsr\" 
(UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.747968 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d29f70a5-3d87-465a-a052-922f9616ac9d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nftpn\" (UID: \"d29f70a5-3d87-465a-a052-922f9616ac9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nftpn" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.747993 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b4d13f62-c469-4050-8974-8ccf32bf0bce-console-serving-cert\") pod \"console-f9d7485db-tfxw5\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.748018 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j7ql\" (UniqueName: \"kubernetes.io/projected/1796e7d1-9237-4700-ba09-c5f1bd74e457-kube-api-access-5j7ql\") pod \"cluster-image-registry-operator-dc59b4c8b-xb6hm\" (UID: \"1796e7d1-9237-4700-ba09-c5f1bd74e457\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xb6hm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.748042 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f02ce80-0362-4208-bfcf-3f68956dd6f2-serving-cert\") pod \"openshift-config-operator-7777fb866f-hc5ks\" (UID: \"7f02ce80-0362-4208-bfcf-3f68956dd6f2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hc5ks" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.748073 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-console-config\") pod \"console-f9d7485db-tfxw5\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.748275 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/197d7a69-19e6-4c08-b68d-f21073ad7487-config\") pod \"console-operator-58897d9998-8d9lm\" (UID: \"197d7a69-19e6-4c08-b68d-f21073ad7487\") " pod="openshift-console-operator/console-operator-58897d9998-8d9lm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.748309 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c5f9\" (UniqueName: \"kubernetes.io/projected/197d7a69-19e6-4c08-b68d-f21073ad7487-kube-api-access-5c5f9\") pod \"console-operator-58897d9998-8d9lm\" (UID: \"197d7a69-19e6-4c08-b68d-f21073ad7487\") " pod="openshift-console-operator/console-operator-58897d9998-8d9lm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.748357 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6dmsr\" (UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.748390 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-etcd-serving-ca\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " 
pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.748437 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-audit-dir\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.748471 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/464f104b-7665-4b2c-a507-81b166174685-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tmbbz\" (UID: \"464f104b-7665-4b2c-a507-81b166174685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.748543 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfmhz\" (UniqueName: \"kubernetes.io/projected/44555695-834e-4ffc-bee2-b16d7adf6fbc-kube-api-access-zfmhz\") pod \"machine-api-operator-5694c8668f-pz8zw\" (UID: \"44555695-834e-4ffc-bee2-b16d7adf6fbc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pz8zw" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.748577 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-node-pullsecrets\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.748748 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/17d9fff8-b1cd-4124-8dc8-607c81e15c21-serviceca\") pod \"image-pruner-29522880-98mw7\" (UID: \"17d9fff8-b1cd-4124-8dc8-607c81e15c21\") " pod="openshift-image-registry/image-pruner-29522880-98mw7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.748794 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78d08277-0a0a-4e0a-ab40-803bfdd76e29-serving-cert\") pod \"route-controller-manager-6576b87f9c-c7dv2\" (UID: \"78d08277-0a0a-4e0a-ab40-803bfdd76e29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.748821 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkbwj\" (UniqueName: \"kubernetes.io/projected/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-kube-api-access-rkbwj\") pod \"apiserver-7bbb656c7d-6dmsr\" (UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.748866 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97682d07-0505-453d-afc6-2d9c8dfc4638-serving-cert\") pod \"authentication-operator-69f744f599-ds8pk\" (UID: \"97682d07-0505-453d-afc6-2d9c8dfc4638\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ds8pk" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.748895 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1796e7d1-9237-4700-ba09-c5f1bd74e457-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xb6hm\" (UID: \"1796e7d1-9237-4700-ba09-c5f1bd74e457\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xb6hm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.748937 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-config\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.748962 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-audit\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.748988 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.749221 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-trusted-ca-bundle\") pod \"console-f9d7485db-tfxw5\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.749271 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19bd61b-ca84-4eb1-aacb-28ef75d7446a-config\") pod 
\"machine-approver-56656f9798-26w6l\" (UID: \"d19bd61b-ca84-4eb1-aacb-28ef75d7446a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26w6l" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.749299 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b4d13f62-c469-4050-8974-8ccf32bf0bce-console-oauth-config\") pod \"console-f9d7485db-tfxw5\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.749361 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-oauth-serving-cert\") pod \"console-f9d7485db-tfxw5\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.749434 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78d08277-0a0a-4e0a-ab40-803bfdd76e29-config\") pod \"route-controller-manager-6576b87f9c-c7dv2\" (UID: \"78d08277-0a0a-4e0a-ab40-803bfdd76e29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.749498 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5fkl\" (UniqueName: \"kubernetes.io/projected/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-kube-api-access-z5fkl\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.749532 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.749582 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.749645 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44555695-834e-4ffc-bee2-b16d7adf6fbc-config\") pod \"machine-api-operator-5694c8668f-pz8zw\" (UID: \"44555695-834e-4ffc-bee2-b16d7adf6fbc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pz8zw" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.749889 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-image-import-ca\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.749955 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97682d07-0505-453d-afc6-2d9c8dfc4638-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-ds8pk\" (UID: \"97682d07-0505-453d-afc6-2d9c8dfc4638\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ds8pk" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.750008 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d19bd61b-ca84-4eb1-aacb-28ef75d7446a-machine-approver-tls\") pod \"machine-approver-56656f9798-26w6l\" (UID: \"d19bd61b-ca84-4eb1-aacb-28ef75d7446a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26w6l" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.750202 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.750251 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-etcd-client\") pod \"apiserver-7bbb656c7d-6dmsr\" (UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.750275 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-audit-dir\") pod \"apiserver-7bbb656c7d-6dmsr\" (UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.750548 4847 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1796e7d1-9237-4700-ba09-c5f1bd74e457-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xb6hm\" (UID: \"1796e7d1-9237-4700-ba09-c5f1bd74e457\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xb6hm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.750620 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvwkk\" (UniqueName: \"kubernetes.io/projected/7f02ce80-0362-4208-bfcf-3f68956dd6f2-kube-api-access-gvwkk\") pod \"openshift-config-operator-7777fb866f-hc5ks\" (UID: \"7f02ce80-0362-4208-bfcf-3f68956dd6f2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hc5ks" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.750661 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-audit-policies\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.751069 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-audit-policies\") pod \"apiserver-7bbb656c7d-6dmsr\" (UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.751137 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-serving-cert\") pod 
\"apiserver-7bbb656c7d-6dmsr\" (UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.751231 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.751377 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-service-ca\") pod \"console-f9d7485db-tfxw5\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.751512 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/197d7a69-19e6-4c08-b68d-f21073ad7487-trusted-ca\") pod \"console-operator-58897d9998-8d9lm\" (UID: \"197d7a69-19e6-4c08-b68d-f21073ad7487\") " pod="openshift-console-operator/console-operator-58897d9998-8d9lm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.751808 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.751912 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-serving-cert\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " 
pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.752011 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6t5r" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.752044 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d19bd61b-ca84-4eb1-aacb-28ef75d7446a-auth-proxy-config\") pod \"machine-approver-56656f9798-26w6l\" (UID: \"d19bd61b-ca84-4eb1-aacb-28ef75d7446a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26w6l" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.752132 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1796e7d1-9237-4700-ba09-c5f1bd74e457-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xb6hm\" (UID: \"1796e7d1-9237-4700-ba09-c5f1bd74e457\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xb6hm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.752161 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-etcd-client\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.752216 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/44555695-834e-4ffc-bee2-b16d7adf6fbc-images\") pod \"machine-api-operator-5694c8668f-pz8zw\" (UID: \"44555695-834e-4ffc-bee2-b16d7adf6fbc\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-pz8zw" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.752330 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/197d7a69-19e6-4c08-b68d-f21073ad7487-serving-cert\") pod \"console-operator-58897d9998-8d9lm\" (UID: \"197d7a69-19e6-4c08-b68d-f21073ad7487\") " pod="openshift-console-operator/console-operator-58897d9998-8d9lm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.752384 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/464f104b-7665-4b2c-a507-81b166174685-serving-cert\") pod \"controller-manager-879f6c89f-tmbbz\" (UID: \"464f104b-7665-4b2c-a507-81b166174685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.752412 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-audit-dir\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.752541 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7f02ce80-0362-4208-bfcf-3f68956dd6f2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hc5ks\" (UID: \"7f02ce80-0362-4208-bfcf-3f68956dd6f2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hc5ks" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.752570 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/464f104b-7665-4b2c-a507-81b166174685-config\") pod \"controller-manager-879f6c89f-tmbbz\" (UID: \"464f104b-7665-4b2c-a507-81b166174685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.752634 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwcz2\" (UniqueName: \"kubernetes.io/projected/464f104b-7665-4b2c-a507-81b166174685-kube-api-access-hwcz2\") pod \"controller-manager-879f6c89f-tmbbz\" (UID: \"464f104b-7665-4b2c-a507-81b166174685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.752674 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97682d07-0505-453d-afc6-2d9c8dfc4638-config\") pod \"authentication-operator-69f744f599-ds8pk\" (UID: \"97682d07-0505-453d-afc6-2d9c8dfc4638\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ds8pk" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.753627 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rj7hf"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.753970 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbzwr\" (UniqueName: \"kubernetes.io/projected/78d08277-0a0a-4e0a-ab40-803bfdd76e29-kube-api-access-wbzwr\") pod \"route-controller-manager-6576b87f9c-c7dv2\" (UID: \"78d08277-0a0a-4e0a-ab40-803bfdd76e29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.754077 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.754108 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.754280 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-encryption-config\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.754452 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/464f104b-7665-4b2c-a507-81b166174685-client-ca\") pod \"controller-manager-879f6c89f-tmbbz\" (UID: \"464f104b-7665-4b2c-a507-81b166174685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.754497 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78d08277-0a0a-4e0a-ab40-803bfdd76e29-client-ca\") pod \"route-controller-manager-6576b87f9c-c7dv2\" (UID: \"78d08277-0a0a-4e0a-ab40-803bfdd76e29\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.754534 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97682d07-0505-453d-afc6-2d9c8dfc4638-service-ca-bundle\") pod \"authentication-operator-69f744f599-ds8pk\" (UID: \"97682d07-0505-453d-afc6-2d9c8dfc4638\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ds8pk" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.754578 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7jqp\" (UniqueName: \"kubernetes.io/projected/97682d07-0505-453d-afc6-2d9c8dfc4638-kube-api-access-q7jqp\") pod \"authentication-operator-69f744f599-ds8pk\" (UID: \"97682d07-0505-453d-afc6-2d9c8dfc4638\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ds8pk" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.754644 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d29f70a5-3d87-465a-a052-922f9616ac9d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nftpn\" (UID: \"d29f70a5-3d87-465a-a052-922f9616ac9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nftpn" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.754773 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.754822 4847 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.754858 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8952z\" (UniqueName: \"kubernetes.io/projected/17d9fff8-b1cd-4124-8dc8-607c81e15c21-kube-api-access-8952z\") pod \"image-pruner-29522880-98mw7\" (UID: \"17d9fff8-b1cd-4124-8dc8-607c81e15c21\") " pod="openshift-image-registry/image-pruner-29522880-98mw7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.754893 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spt2d\" (UniqueName: \"kubernetes.io/projected/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-kube-api-access-spt2d\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.754924 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvn76\" (UniqueName: \"kubernetes.io/projected/40549f47-53f3-4990-a2b0-921413ba5862-kube-api-access-zvn76\") pod \"downloads-7954f5f757-sc5ff\" (UID: \"40549f47-53f3-4990-a2b0-921413ba5862\") " pod="openshift-console/downloads-7954f5f757-sc5ff" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.754959 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tctqj\" (UniqueName: \"kubernetes.io/projected/d19bd61b-ca84-4eb1-aacb-28ef75d7446a-kube-api-access-tctqj\") pod 
\"machine-approver-56656f9798-26w6l\" (UID: \"d19bd61b-ca84-4eb1-aacb-28ef75d7446a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26w6l" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.754998 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/44555695-834e-4ffc-bee2-b16d7adf6fbc-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pz8zw\" (UID: \"44555695-834e-4ffc-bee2-b16d7adf6fbc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pz8zw" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.757675 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-lw7v7"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.758967 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.759057 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rj7hf" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.757726 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.768184 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.770446 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.771521 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.772873 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvh8m"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.773542 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvh8m" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.774041 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lw7v7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.775957 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mbftk"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.777166 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mbftk" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.781008 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtzlv"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.781685 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.782673 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522895-kpcdn"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.783280 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtzlv" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.783529 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.783766 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.783993 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-kpcdn" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.784849 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6hrpj"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.785722 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6hrpj" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.785967 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hwnbk"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.786764 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwnbk" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.787006 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-4hc85"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.787545 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4hc85" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.788297 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nftpn"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.790301 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xk7s7"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.792364 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ds8pk"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.792403 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tmbbz"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.793646 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p5gdf"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.794888 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-api/machine-api-operator-5694c8668f-pz8zw"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.796219 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d4c9w"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.797274 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.797577 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lg2zf"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.798995 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8d9lm"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.800110 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-sc5ff"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.801464 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hc5ks"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.804138 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cz42z"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.805594 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n6t5r"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.806486 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-lw7v7"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.808310 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jzfq"] Feb 18 
00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.808481 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xb6hm"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.809534 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mbftk"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.810552 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tfxw5"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.811563 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5xpzg"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.812643 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rj7hf"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.813633 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-n67v9"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.815469 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7cgq"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.815825 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-j66wn"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.815855 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-n67v9" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.817417 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.817728 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.817758 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522895-kpcdn"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.817864 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-j66wn" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.818828 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6hrpj"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.819889 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sgg8q"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.820949 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n6zk7"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.821971 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvh8m"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.827571 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pjq6b"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.830086 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hwsk5"] Feb 18 
00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.832791 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9jnmn"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.834021 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4v5gj"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.835457 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtzlv"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.836584 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6kdbm"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.837424 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.837912 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-n67v9"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.839689 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hwnbk"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.841438 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-j66wn"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.843465 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4hc85"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.845102 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-4bt25"] Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.846572 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4bt25" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.856449 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-etcd-client\") pod \"apiserver-7bbb656c7d-6dmsr\" (UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.856564 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-audit-dir\") pod \"apiserver-7bbb656c7d-6dmsr\" (UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.856699 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ca77c22-f027-41f1-a8dd-f40048047f45-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lg2zf\" (UID: \"7ca77c22-f027-41f1-a8dd-f40048047f45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lg2zf" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.856807 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpmhp\" (UniqueName: \"kubernetes.io/projected/8f65dff0-7fe0-47ec-a0e4-36f6abcffc27-kube-api-access-hpmhp\") pod \"etcd-operator-b45778765-pjq6b\" (UID: \"8f65dff0-7fe0-47ec-a0e4-36f6abcffc27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pjq6b" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.856658 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-audit-dir\") pod \"apiserver-7bbb656c7d-6dmsr\" (UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.856982 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b5d0643c-a44f-4323-87a4-f70dc16a4a6b-apiservice-cert\") pod \"packageserver-d55dfcdfc-t4r74\" (UID: \"b5d0643c-a44f-4323-87a4-f70dc16a4a6b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.857069 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvwkk\" (UniqueName: \"kubernetes.io/projected/7f02ce80-0362-4208-bfcf-3f68956dd6f2-kube-api-access-gvwkk\") pod \"openshift-config-operator-7777fb866f-hc5ks\" (UID: \"7f02ce80-0362-4208-bfcf-3f68956dd6f2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hc5ks" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.857119 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-audit-policies\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.857156 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b5d0643c-a44f-4323-87a4-f70dc16a4a6b-tmpfs\") pod \"packageserver-d55dfcdfc-t4r74\" (UID: \"b5d0643c-a44f-4323-87a4-f70dc16a4a6b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.857192 4847 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bwm5\" (UniqueName: \"kubernetes.io/projected/bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4-kube-api-access-4bwm5\") pod \"router-default-5444994796-hc9j8\" (UID: \"bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4\") " pod="openshift-ingress/router-default-5444994796-hc9j8" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.857228 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1796e7d1-9237-4700-ba09-c5f1bd74e457-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xb6hm\" (UID: \"1796e7d1-9237-4700-ba09-c5f1bd74e457\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xb6hm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.857258 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-audit-policies\") pod \"apiserver-7bbb656c7d-6dmsr\" (UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.857288 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-serving-cert\") pod \"apiserver-7bbb656c7d-6dmsr\" (UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.857324 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-service-ca\") pod \"console-f9d7485db-tfxw5\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:27:47 
crc kubenswrapper[4847]: I0218 00:27:47.857359 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/197d7a69-19e6-4c08-b68d-f21073ad7487-trusted-ca\") pod \"console-operator-58897d9998-8d9lm\" (UID: \"197d7a69-19e6-4c08-b68d-f21073ad7487\") " pod="openshift-console-operator/console-operator-58897d9998-8d9lm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.857401 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f65dff0-7fe0-47ec-a0e4-36f6abcffc27-config\") pod \"etcd-operator-b45778765-pjq6b\" (UID: \"8f65dff0-7fe0-47ec-a0e4-36f6abcffc27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pjq6b" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.857437 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.857475 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-serving-cert\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.857508 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6e36bd7a-1a37-44cf-90aa-c8cbb23f7508-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4v5gj\" (UID: \"6e36bd7a-1a37-44cf-90aa-c8cbb23f7508\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4v5gj" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.857546 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/daaf1919-f9da-4151-8932-4c77a478b531-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hwsk5\" (UID: \"daaf1919-f9da-4151-8932-4c77a478b531\") " pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.857580 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ca77c22-f027-41f1-a8dd-f40048047f45-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lg2zf\" (UID: \"7ca77c22-f027-41f1-a8dd-f40048047f45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lg2zf" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.857633 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d19bd61b-ca84-4eb1-aacb-28ef75d7446a-auth-proxy-config\") pod \"machine-approver-56656f9798-26w6l\" (UID: \"d19bd61b-ca84-4eb1-aacb-28ef75d7446a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26w6l" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.857681 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1796e7d1-9237-4700-ba09-c5f1bd74e457-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xb6hm\" (UID: \"1796e7d1-9237-4700-ba09-c5f1bd74e457\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xb6hm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.857716 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-etcd-client\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.857747 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/44555695-834e-4ffc-bee2-b16d7adf6fbc-images\") pod \"machine-api-operator-5694c8668f-pz8zw\" (UID: \"44555695-834e-4ffc-bee2-b16d7adf6fbc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pz8zw" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.857776 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/197d7a69-19e6-4c08-b68d-f21073ad7487-serving-cert\") pod \"console-operator-58897d9998-8d9lm\" (UID: \"197d7a69-19e6-4c08-b68d-f21073ad7487\") " pod="openshift-console-operator/console-operator-58897d9998-8d9lm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.858182 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/464f104b-7665-4b2c-a507-81b166174685-serving-cert\") pod \"controller-manager-879f6c89f-tmbbz\" (UID: \"464f104b-7665-4b2c-a507-81b166174685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.858230 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-audit-dir\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.858277 4847 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.858639 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-audit-policies\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.859434 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/44555695-834e-4ffc-bee2-b16d7adf6fbc-images\") pod \"machine-api-operator-5694c8668f-pz8zw\" (UID: \"44555695-834e-4ffc-bee2-b16d7adf6fbc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pz8zw" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.862017 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-etcd-client\") pod \"apiserver-7bbb656c7d-6dmsr\" (UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.862170 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-audit-policies\") pod \"apiserver-7bbb656c7d-6dmsr\" (UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.862131 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/197d7a69-19e6-4c08-b68d-f21073ad7487-trusted-ca\") pod \"console-operator-58897d9998-8d9lm\" (UID: 
\"197d7a69-19e6-4c08-b68d-f21073ad7487\") " pod="openshift-console-operator/console-operator-58897d9998-8d9lm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.864340 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-serving-cert\") pod \"apiserver-7bbb656c7d-6dmsr\" (UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.864710 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1796e7d1-9237-4700-ba09-c5f1bd74e457-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xb6hm\" (UID: \"1796e7d1-9237-4700-ba09-c5f1bd74e457\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xb6hm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.867921 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-audit-dir\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.867969 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f78l\" (UniqueName: \"kubernetes.io/projected/80ed5db4-b6af-43e0-8d98-8f544e9b6d5e-kube-api-access-7f78l\") pod \"dns-operator-744455d44c-5xpzg\" (UID: \"80ed5db4-b6af-43e0-8d98-8f544e9b6d5e\") " pod="openshift-dns-operator/dns-operator-744455d44c-5xpzg" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.868126 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/7f02ce80-0362-4208-bfcf-3f68956dd6f2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hc5ks\" (UID: \"7f02ce80-0362-4208-bfcf-3f68956dd6f2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hc5ks" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.868211 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/464f104b-7665-4b2c-a507-81b166174685-config\") pod \"controller-manager-879f6c89f-tmbbz\" (UID: \"464f104b-7665-4b2c-a507-81b166174685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.868246 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwcz2\" (UniqueName: \"kubernetes.io/projected/464f104b-7665-4b2c-a507-81b166174685-kube-api-access-hwcz2\") pod \"controller-manager-879f6c89f-tmbbz\" (UID: \"464f104b-7665-4b2c-a507-81b166174685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.868324 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97682d07-0505-453d-afc6-2d9c8dfc4638-config\") pod \"authentication-operator-69f744f599-ds8pk\" (UID: \"97682d07-0505-453d-afc6-2d9c8dfc4638\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ds8pk" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.868366 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca77c22-f027-41f1-a8dd-f40048047f45-config\") pod \"kube-apiserver-operator-766d6c64bb-lg2zf\" (UID: \"7ca77c22-f027-41f1-a8dd-f40048047f45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lg2zf" Feb 18 00:27:47 crc 
kubenswrapper[4847]: I0218 00:27:47.868385 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8f65dff0-7fe0-47ec-a0e4-36f6abcffc27-etcd-ca\") pod \"etcd-operator-b45778765-pjq6b\" (UID: \"8f65dff0-7fe0-47ec-a0e4-36f6abcffc27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pjq6b" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.868519 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b5d0643c-a44f-4323-87a4-f70dc16a4a6b-webhook-cert\") pod \"packageserver-d55dfcdfc-t4r74\" (UID: \"b5d0643c-a44f-4323-87a4-f70dc16a4a6b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.868553 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbzwr\" (UniqueName: \"kubernetes.io/projected/78d08277-0a0a-4e0a-ab40-803bfdd76e29-kube-api-access-wbzwr\") pod \"route-controller-manager-6576b87f9c-c7dv2\" (UID: \"78d08277-0a0a-4e0a-ab40-803bfdd76e29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.868588 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.868640 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.868669 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-encryption-config\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.868706 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/464f104b-7665-4b2c-a507-81b166174685-client-ca\") pod \"controller-manager-879f6c89f-tmbbz\" (UID: \"464f104b-7665-4b2c-a507-81b166174685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.868725 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78d08277-0a0a-4e0a-ab40-803bfdd76e29-client-ca\") pod \"route-controller-manager-6576b87f9c-c7dv2\" (UID: \"78d08277-0a0a-4e0a-ab40-803bfdd76e29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.868745 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97682d07-0505-453d-afc6-2d9c8dfc4638-service-ca-bundle\") pod \"authentication-operator-69f744f599-ds8pk\" (UID: \"97682d07-0505-453d-afc6-2d9c8dfc4638\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ds8pk" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.869034 
4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d19bd61b-ca84-4eb1-aacb-28ef75d7446a-auth-proxy-config\") pod \"machine-approver-56656f9798-26w6l\" (UID: \"d19bd61b-ca84-4eb1-aacb-28ef75d7446a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26w6l" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.869193 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-service-ca\") pod \"console-f9d7485db-tfxw5\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.869039 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7f02ce80-0362-4208-bfcf-3f68956dd6f2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hc5ks\" (UID: \"7f02ce80-0362-4208-bfcf-3f68956dd6f2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hc5ks" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.869708 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/464f104b-7665-4b2c-a507-81b166174685-client-ca\") pod \"controller-manager-879f6c89f-tmbbz\" (UID: \"464f104b-7665-4b2c-a507-81b166174685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.869789 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.869992 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/464f104b-7665-4b2c-a507-81b166174685-config\") pod \"controller-manager-879f6c89f-tmbbz\" (UID: \"464f104b-7665-4b2c-a507-81b166174685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.870203 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97682d07-0505-453d-afc6-2d9c8dfc4638-config\") pod \"authentication-operator-69f744f599-ds8pk\" (UID: \"97682d07-0505-453d-afc6-2d9c8dfc4638\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ds8pk" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.870397 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97682d07-0505-453d-afc6-2d9c8dfc4638-service-ca-bundle\") pod \"authentication-operator-69f744f599-ds8pk\" (UID: \"97682d07-0505-453d-afc6-2d9c8dfc4638\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ds8pk" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.868765 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7jqp\" (UniqueName: \"kubernetes.io/projected/97682d07-0505-453d-afc6-2d9c8dfc4638-kube-api-access-q7jqp\") pod \"authentication-operator-69f744f599-ds8pk\" (UID: \"97682d07-0505-453d-afc6-2d9c8dfc4638\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ds8pk" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.870477 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.870501 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.870536 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/595c45e9-e480-4930-b3b6-5075f16629a9-proxy-tls\") pod \"machine-config-operator-74547568cd-cz42z\" (UID: \"595c45e9-e480-4930-b3b6-5075f16629a9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cz42z" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.870562 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d29f70a5-3d87-465a-a052-922f9616ac9d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nftpn\" (UID: \"d29f70a5-3d87-465a-a052-922f9616ac9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nftpn" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.870582 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvn76\" (UniqueName: \"kubernetes.io/projected/40549f47-53f3-4990-a2b0-921413ba5862-kube-api-access-zvn76\") pod \"downloads-7954f5f757-sc5ff\" (UID: \"40549f47-53f3-4990-a2b0-921413ba5862\") " pod="openshift-console/downloads-7954f5f757-sc5ff" Feb 18 00:27:47 crc 
kubenswrapper[4847]: I0218 00:27:47.870623 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tctqj\" (UniqueName: \"kubernetes.io/projected/d19bd61b-ca84-4eb1-aacb-28ef75d7446a-kube-api-access-tctqj\") pod \"machine-approver-56656f9798-26w6l\" (UID: \"d19bd61b-ca84-4eb1-aacb-28ef75d7446a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26w6l" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.870643 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/44555695-834e-4ffc-bee2-b16d7adf6fbc-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pz8zw\" (UID: \"44555695-834e-4ffc-bee2-b16d7adf6fbc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pz8zw" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.870661 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/595c45e9-e480-4930-b3b6-5075f16629a9-images\") pod \"machine-config-operator-74547568cd-cz42z\" (UID: \"595c45e9-e480-4930-b3b6-5075f16629a9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cz42z" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.870681 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4033b09d-aa99-4b2d-b12f-c5e6f58530f0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n6t5r\" (UID: \"4033b09d-aa99-4b2d-b12f-c5e6f58530f0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n6t5r" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.870718 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/6e36bd7a-1a37-44cf-90aa-c8cbb23f7508-metrics-tls\") pod \"ingress-operator-5b745b69d9-4v5gj\" (UID: \"6e36bd7a-1a37-44cf-90aa-c8cbb23f7508\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4v5gj" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.870712 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.870737 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8952z\" (UniqueName: \"kubernetes.io/projected/17d9fff8-b1cd-4124-8dc8-607c81e15c21-kube-api-access-8952z\") pod \"image-pruner-29522880-98mw7\" (UID: \"17d9fff8-b1cd-4124-8dc8-607c81e15c21\") " pod="openshift-image-registry/image-pruner-29522880-98mw7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.870756 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spt2d\" (UniqueName: \"kubernetes.io/projected/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-kube-api-access-spt2d\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.870793 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: 
I0218 00:27:47.870813 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cktp\" (UniqueName: \"kubernetes.io/projected/b902c054-bc7f-41e7-bcb3-bba9f5dc921d-kube-api-access-4cktp\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jzfq\" (UID: \"b902c054-bc7f-41e7-bcb3-bba9f5dc921d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jzfq" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.870907 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.870758 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78d08277-0a0a-4e0a-ab40-803bfdd76e29-client-ca\") pod \"route-controller-manager-6576b87f9c-c7dv2\" (UID: \"78d08277-0a0a-4e0a-ab40-803bfdd76e29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.870990 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42jqw\" (UniqueName: \"kubernetes.io/projected/595c45e9-e480-4930-b3b6-5075f16629a9-kube-api-access-42jqw\") pod \"machine-config-operator-74547568cd-cz42z\" (UID: \"595c45e9-e480-4930-b3b6-5075f16629a9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cz42z" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.870593 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.871715 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-serving-cert\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.870995 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/464f104b-7665-4b2c-a507-81b166174685-serving-cert\") pod \"controller-manager-879f6c89f-tmbbz\" (UID: \"464f104b-7665-4b2c-a507-81b166174685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872133 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxwgt\" (UniqueName: \"kubernetes.io/projected/4033b09d-aa99-4b2d-b12f-c5e6f58530f0-kube-api-access-wxwgt\") pod \"multus-admission-controller-857f4d67dd-n6t5r\" (UID: \"4033b09d-aa99-4b2d-b12f-c5e6f58530f0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n6t5r" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872192 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6hwx\" (UniqueName: \"kubernetes.io/projected/b4d13f62-c469-4050-8974-8ccf32bf0bce-kube-api-access-s6hwx\") pod \"console-f9d7485db-tfxw5\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872230 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872194 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-encryption-config\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872278 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872327 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5qb7\" (UniqueName: \"kubernetes.io/projected/d29f70a5-3d87-465a-a052-922f9616ac9d-kube-api-access-h5qb7\") pod \"openshift-apiserver-operator-796bbdcf4f-nftpn\" (UID: \"d29f70a5-3d87-465a-a052-922f9616ac9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nftpn" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872364 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-encryption-config\") pod \"apiserver-7bbb656c7d-6dmsr\" (UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872400 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6dmsr\" (UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872437 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d29f70a5-3d87-465a-a052-922f9616ac9d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nftpn\" (UID: \"d29f70a5-3d87-465a-a052-922f9616ac9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nftpn" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872467 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b4d13f62-c469-4050-8974-8ccf32bf0bce-console-serving-cert\") pod \"console-f9d7485db-tfxw5\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872500 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cp45\" (UniqueName: \"kubernetes.io/projected/6e36bd7a-1a37-44cf-90aa-c8cbb23f7508-kube-api-access-2cp45\") pod \"ingress-operator-5b745b69d9-4v5gj\" (UID: \"6e36bd7a-1a37-44cf-90aa-c8cbb23f7508\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4v5gj" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872554 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j7ql\" (UniqueName: 
\"kubernetes.io/projected/1796e7d1-9237-4700-ba09-c5f1bd74e457-kube-api-access-5j7ql\") pod \"cluster-image-registry-operator-dc59b4c8b-xb6hm\" (UID: \"1796e7d1-9237-4700-ba09-c5f1bd74e457\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xb6hm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872586 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f02ce80-0362-4208-bfcf-3f68956dd6f2-serving-cert\") pod \"openshift-config-operator-7777fb866f-hc5ks\" (UID: \"7f02ce80-0362-4208-bfcf-3f68956dd6f2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hc5ks" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872629 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-console-config\") pod \"console-f9d7485db-tfxw5\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872663 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/197d7a69-19e6-4c08-b68d-f21073ad7487-config\") pod \"console-operator-58897d9998-8d9lm\" (UID: \"197d7a69-19e6-4c08-b68d-f21073ad7487\") " pod="openshift-console-operator/console-operator-58897d9998-8d9lm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872690 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c5f9\" (UniqueName: \"kubernetes.io/projected/197d7a69-19e6-4c08-b68d-f21073ad7487-kube-api-access-5c5f9\") pod \"console-operator-58897d9998-8d9lm\" (UID: \"197d7a69-19e6-4c08-b68d-f21073ad7487\") " pod="openshift-console-operator/console-operator-58897d9998-8d9lm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872722 4847 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6dmsr\" (UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872754 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-etcd-serving-ca\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872787 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-audit-dir\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872828 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b902c054-bc7f-41e7-bcb3-bba9f5dc921d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jzfq\" (UID: \"b902c054-bc7f-41e7-bcb3-bba9f5dc921d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jzfq" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872870 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thtkz\" (UniqueName: \"kubernetes.io/projected/b5d0643c-a44f-4323-87a4-f70dc16a4a6b-kube-api-access-thtkz\") pod \"packageserver-d55dfcdfc-t4r74\" (UID: \"b5d0643c-a44f-4323-87a4-f70dc16a4a6b\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872906 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/464f104b-7665-4b2c-a507-81b166174685-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tmbbz\" (UID: \"464f104b-7665-4b2c-a507-81b166174685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872949 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfmhz\" (UniqueName: \"kubernetes.io/projected/44555695-834e-4ffc-bee2-b16d7adf6fbc-kube-api-access-zfmhz\") pod \"machine-api-operator-5694c8668f-pz8zw\" (UID: \"44555695-834e-4ffc-bee2-b16d7adf6fbc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pz8zw" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.872985 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-node-pullsecrets\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873019 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b902c054-bc7f-41e7-bcb3-bba9f5dc921d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jzfq\" (UID: \"b902c054-bc7f-41e7-bcb3-bba9f5dc921d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jzfq" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873053 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6dmsr\" (UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873061 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e36bd7a-1a37-44cf-90aa-c8cbb23f7508-trusted-ca\") pod \"ingress-operator-5b745b69d9-4v5gj\" (UID: \"6e36bd7a-1a37-44cf-90aa-c8cbb23f7508\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4v5gj" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873139 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/17d9fff8-b1cd-4124-8dc8-607c81e15c21-serviceca\") pod \"image-pruner-29522880-98mw7\" (UID: \"17d9fff8-b1cd-4124-8dc8-607c81e15c21\") " pod="openshift-image-registry/image-pruner-29522880-98mw7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873172 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78d08277-0a0a-4e0a-ab40-803bfdd76e29-serving-cert\") pod \"route-controller-manager-6576b87f9c-c7dv2\" (UID: \"78d08277-0a0a-4e0a-ab40-803bfdd76e29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873221 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkbwj\" (UniqueName: \"kubernetes.io/projected/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-kube-api-access-rkbwj\") pod \"apiserver-7bbb656c7d-6dmsr\" (UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873252 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97682d07-0505-453d-afc6-2d9c8dfc4638-serving-cert\") pod \"authentication-operator-69f744f599-ds8pk\" (UID: \"97682d07-0505-453d-afc6-2d9c8dfc4638\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ds8pk" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873285 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4-default-certificate\") pod \"router-default-5444994796-hc9j8\" (UID: \"bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4\") " pod="openshift-ingress/router-default-5444994796-hc9j8" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873330 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1796e7d1-9237-4700-ba09-c5f1bd74e457-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xb6hm\" (UID: \"1796e7d1-9237-4700-ba09-c5f1bd74e457\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xb6hm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873366 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-config\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873400 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-trusted-ca-bundle\") pod \"console-f9d7485db-tfxw5\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:27:47 crc 
kubenswrapper[4847]: I0218 00:27:47.873411 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6dmsr\" (UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873433 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19bd61b-ca84-4eb1-aacb-28ef75d7446a-config\") pod \"machine-approver-56656f9798-26w6l\" (UID: \"d19bd61b-ca84-4eb1-aacb-28ef75d7446a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26w6l" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873466 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f65dff0-7fe0-47ec-a0e4-36f6abcffc27-serving-cert\") pod \"etcd-operator-b45778765-pjq6b\" (UID: \"8f65dff0-7fe0-47ec-a0e4-36f6abcffc27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pjq6b" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873497 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8f65dff0-7fe0-47ec-a0e4-36f6abcffc27-etcd-client\") pod \"etcd-operator-b45778765-pjq6b\" (UID: \"8f65dff0-7fe0-47ec-a0e4-36f6abcffc27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pjq6b" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873525 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-audit\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 
18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873555 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873587 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b4d13f62-c469-4050-8974-8ccf32bf0bce-console-oauth-config\") pod \"console-f9d7485db-tfxw5\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873633 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-oauth-serving-cert\") pod \"console-f9d7485db-tfxw5\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873665 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqs7m\" (UniqueName: \"kubernetes.io/projected/daaf1919-f9da-4151-8932-4c77a478b531-kube-api-access-qqs7m\") pod \"marketplace-operator-79b997595-hwsk5\" (UID: \"daaf1919-f9da-4151-8932-4c77a478b531\") " pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873693 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4-service-ca-bundle\") pod \"router-default-5444994796-hc9j8\" (UID: 
\"bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4\") " pod="openshift-ingress/router-default-5444994796-hc9j8" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873722 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78d08277-0a0a-4e0a-ab40-803bfdd76e29-config\") pod \"route-controller-manager-6576b87f9c-c7dv2\" (UID: \"78d08277-0a0a-4e0a-ab40-803bfdd76e29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873753 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5fkl\" (UniqueName: \"kubernetes.io/projected/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-kube-api-access-z5fkl\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873790 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873822 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873857 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/daaf1919-f9da-4151-8932-4c77a478b531-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hwsk5\" (UID: \"daaf1919-f9da-4151-8932-4c77a478b531\") " pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873887 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/80ed5db4-b6af-43e0-8d98-8f544e9b6d5e-metrics-tls\") pod \"dns-operator-744455d44c-5xpzg\" (UID: \"80ed5db4-b6af-43e0-8d98-8f544e9b6d5e\") " pod="openshift-dns-operator/dns-operator-744455d44c-5xpzg" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873903 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-audit-dir\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873917 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4-metrics-certs\") pod \"router-default-5444994796-hc9j8\" (UID: \"bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4\") " pod="openshift-ingress/router-default-5444994796-hc9j8" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873925 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/197d7a69-19e6-4c08-b68d-f21073ad7487-serving-cert\") pod \"console-operator-58897d9998-8d9lm\" (UID: \"197d7a69-19e6-4c08-b68d-f21073ad7487\") " pod="openshift-console-operator/console-operator-58897d9998-8d9lm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873956 4847 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44555695-834e-4ffc-bee2-b16d7adf6fbc-config\") pod \"machine-api-operator-5694c8668f-pz8zw\" (UID: \"44555695-834e-4ffc-bee2-b16d7adf6fbc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pz8zw" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873989 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/595c45e9-e480-4930-b3b6-5075f16629a9-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cz42z\" (UID: \"595c45e9-e480-4930-b3b6-5075f16629a9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cz42z" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.874022 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8f65dff0-7fe0-47ec-a0e4-36f6abcffc27-etcd-service-ca\") pod \"etcd-operator-b45778765-pjq6b\" (UID: \"8f65dff0-7fe0-47ec-a0e4-36f6abcffc27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pjq6b" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.874062 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-image-import-ca\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.874093 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97682d07-0505-453d-afc6-2d9c8dfc4638-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ds8pk\" (UID: \"97682d07-0505-453d-afc6-2d9c8dfc4638\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-ds8pk" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.874110 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.874124 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d19bd61b-ca84-4eb1-aacb-28ef75d7446a-machine-approver-tls\") pod \"machine-approver-56656f9798-26w6l\" (UID: \"d19bd61b-ca84-4eb1-aacb-28ef75d7446a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26w6l" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.874158 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.874186 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4-stats-auth\") pod \"router-default-5444994796-hc9j8\" (UID: \"bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4\") " pod="openshift-ingress/router-default-5444994796-hc9j8" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.874643 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/197d7a69-19e6-4c08-b68d-f21073ad7487-config\") pod \"console-operator-58897d9998-8d9lm\" (UID: \"197d7a69-19e6-4c08-b68d-f21073ad7487\") " pod="openshift-console-operator/console-operator-58897d9998-8d9lm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.874834 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-oauth-serving-cert\") pod \"console-f9d7485db-tfxw5\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.874911 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.874986 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-node-pullsecrets\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.873860 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-etcd-serving-ca\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.875453 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" 
(UniqueName: \"kubernetes.io/secret/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-encryption-config\") pod \"apiserver-7bbb656c7d-6dmsr\" (UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.875669 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.876034 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19bd61b-ca84-4eb1-aacb-28ef75d7446a-config\") pod \"machine-approver-56656f9798-26w6l\" (UID: \"d19bd61b-ca84-4eb1-aacb-28ef75d7446a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26w6l" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.876512 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/17d9fff8-b1cd-4124-8dc8-607c81e15c21-serviceca\") pod \"image-pruner-29522880-98mw7\" (UID: \"17d9fff8-b1cd-4124-8dc8-607c81e15c21\") " pod="openshift-image-registry/image-pruner-29522880-98mw7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.876990 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78d08277-0a0a-4e0a-ab40-803bfdd76e29-config\") pod \"route-controller-manager-6576b87f9c-c7dv2\" (UID: \"78d08277-0a0a-4e0a-ab40-803bfdd76e29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.877097 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/44555695-834e-4ffc-bee2-b16d7adf6fbc-config\") pod \"machine-api-operator-5694c8668f-pz8zw\" (UID: \"44555695-834e-4ffc-bee2-b16d7adf6fbc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pz8zw" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.877402 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-trusted-ca-bundle\") pod \"console-f9d7485db-tfxw5\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.877730 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.877995 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/44555695-834e-4ffc-bee2-b16d7adf6fbc-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pz8zw\" (UID: \"44555695-834e-4ffc-bee2-b16d7adf6fbc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pz8zw" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.879805 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d19bd61b-ca84-4eb1-aacb-28ef75d7446a-machine-approver-tls\") pod \"machine-approver-56656f9798-26w6l\" (UID: \"d19bd61b-ca84-4eb1-aacb-28ef75d7446a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26w6l" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.879973 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f02ce80-0362-4208-bfcf-3f68956dd6f2-serving-cert\") pod 
\"openshift-config-operator-7777fb866f-hc5ks\" (UID: \"7f02ce80-0362-4208-bfcf-3f68956dd6f2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hc5ks" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.880241 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97682d07-0505-453d-afc6-2d9c8dfc4638-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ds8pk\" (UID: \"97682d07-0505-453d-afc6-2d9c8dfc4638\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ds8pk" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.880591 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-audit\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.880738 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b4d13f62-c469-4050-8974-8ccf32bf0bce-console-oauth-config\") pod \"console-f9d7485db-tfxw5\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.880916 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.881061 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.881088 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.881187 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/464f104b-7665-4b2c-a507-81b166174685-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-tmbbz\" (UID: \"464f104b-7665-4b2c-a507-81b166174685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.881422 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.881490 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-console-config\") pod \"console-f9d7485db-tfxw5\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.881672 4847 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-config\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.881709 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1796e7d1-9237-4700-ba09-c5f1bd74e457-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xb6hm\" (UID: \"1796e7d1-9237-4700-ba09-c5f1bd74e457\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xb6hm" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.881880 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-image-import-ca\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.881968 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d29f70a5-3d87-465a-a052-922f9616ac9d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nftpn\" (UID: \"d29f70a5-3d87-465a-a052-922f9616ac9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nftpn" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.882316 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-etcd-client\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.882456 4847 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d29f70a5-3d87-465a-a052-922f9616ac9d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nftpn\" (UID: \"d29f70a5-3d87-465a-a052-922f9616ac9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nftpn" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.882892 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.883028 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97682d07-0505-453d-afc6-2d9c8dfc4638-serving-cert\") pod \"authentication-operator-69f744f599-ds8pk\" (UID: \"97682d07-0505-453d-afc6-2d9c8dfc4638\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ds8pk" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.883826 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78d08277-0a0a-4e0a-ab40-803bfdd76e29-serving-cert\") pod \"route-controller-manager-6576b87f9c-c7dv2\" (UID: \"78d08277-0a0a-4e0a-ab40-803bfdd76e29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.886100 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b4d13f62-c469-4050-8974-8ccf32bf0bce-console-serving-cert\") pod \"console-f9d7485db-tfxw5\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " pod="openshift-console/console-f9d7485db-tfxw5" 
Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.917681 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.937146 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.956999 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.974889 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f65dff0-7fe0-47ec-a0e4-36f6abcffc27-config\") pod \"etcd-operator-b45778765-pjq6b\" (UID: \"8f65dff0-7fe0-47ec-a0e4-36f6abcffc27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pjq6b" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.974926 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6e36bd7a-1a37-44cf-90aa-c8cbb23f7508-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4v5gj\" (UID: \"6e36bd7a-1a37-44cf-90aa-c8cbb23f7508\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4v5gj" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.974944 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/daaf1919-f9da-4151-8932-4c77a478b531-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hwsk5\" (UID: \"daaf1919-f9da-4151-8932-4c77a478b531\") " pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.974961 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/7ca77c22-f027-41f1-a8dd-f40048047f45-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lg2zf\" (UID: \"7ca77c22-f027-41f1-a8dd-f40048047f45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lg2zf" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.974996 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f78l\" (UniqueName: \"kubernetes.io/projected/80ed5db4-b6af-43e0-8d98-8f544e9b6d5e-kube-api-access-7f78l\") pod \"dns-operator-744455d44c-5xpzg\" (UID: \"80ed5db4-b6af-43e0-8d98-8f544e9b6d5e\") " pod="openshift-dns-operator/dns-operator-744455d44c-5xpzg" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.975161 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca77c22-f027-41f1-a8dd-f40048047f45-config\") pod \"kube-apiserver-operator-766d6c64bb-lg2zf\" (UID: \"7ca77c22-f027-41f1-a8dd-f40048047f45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lg2zf" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.975200 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8f65dff0-7fe0-47ec-a0e4-36f6abcffc27-etcd-ca\") pod \"etcd-operator-b45778765-pjq6b\" (UID: \"8f65dff0-7fe0-47ec-a0e4-36f6abcffc27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pjq6b" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.975235 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b5d0643c-a44f-4323-87a4-f70dc16a4a6b-webhook-cert\") pod \"packageserver-d55dfcdfc-t4r74\" (UID: \"b5d0643c-a44f-4323-87a4-f70dc16a4a6b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.975275 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/595c45e9-e480-4930-b3b6-5075f16629a9-proxy-tls\") pod \"machine-config-operator-74547568cd-cz42z\" (UID: \"595c45e9-e480-4930-b3b6-5075f16629a9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cz42z" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.975295 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4033b09d-aa99-4b2d-b12f-c5e6f58530f0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n6t5r\" (UID: \"4033b09d-aa99-4b2d-b12f-c5e6f58530f0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n6t5r" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.975314 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6e36bd7a-1a37-44cf-90aa-c8cbb23f7508-metrics-tls\") pod \"ingress-operator-5b745b69d9-4v5gj\" (UID: \"6e36bd7a-1a37-44cf-90aa-c8cbb23f7508\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4v5gj" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.975379 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/595c45e9-e480-4930-b3b6-5075f16629a9-images\") pod \"machine-config-operator-74547568cd-cz42z\" (UID: \"595c45e9-e480-4930-b3b6-5075f16629a9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cz42z" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.975407 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cktp\" (UniqueName: \"kubernetes.io/projected/b902c054-bc7f-41e7-bcb3-bba9f5dc921d-kube-api-access-4cktp\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jzfq\" (UID: \"b902c054-bc7f-41e7-bcb3-bba9f5dc921d\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jzfq" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.975430 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42jqw\" (UniqueName: \"kubernetes.io/projected/595c45e9-e480-4930-b3b6-5075f16629a9-kube-api-access-42jqw\") pod \"machine-config-operator-74547568cd-cz42z\" (UID: \"595c45e9-e480-4930-b3b6-5075f16629a9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cz42z" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.975452 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxwgt\" (UniqueName: \"kubernetes.io/projected/4033b09d-aa99-4b2d-b12f-c5e6f58530f0-kube-api-access-wxwgt\") pod \"multus-admission-controller-857f4d67dd-n6t5r\" (UID: \"4033b09d-aa99-4b2d-b12f-c5e6f58530f0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n6t5r" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.975485 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cp45\" (UniqueName: \"kubernetes.io/projected/6e36bd7a-1a37-44cf-90aa-c8cbb23f7508-kube-api-access-2cp45\") pod \"ingress-operator-5b745b69d9-4v5gj\" (UID: \"6e36bd7a-1a37-44cf-90aa-c8cbb23f7508\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4v5gj" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.975562 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b902c054-bc7f-41e7-bcb3-bba9f5dc921d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jzfq\" (UID: \"b902c054-bc7f-41e7-bcb3-bba9f5dc921d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jzfq" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.975882 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-thtkz\" (UniqueName: \"kubernetes.io/projected/b5d0643c-a44f-4323-87a4-f70dc16a4a6b-kube-api-access-thtkz\") pod \"packageserver-d55dfcdfc-t4r74\" (UID: \"b5d0643c-a44f-4323-87a4-f70dc16a4a6b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.975923 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b902c054-bc7f-41e7-bcb3-bba9f5dc921d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jzfq\" (UID: \"b902c054-bc7f-41e7-bcb3-bba9f5dc921d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jzfq" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.976028 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e36bd7a-1a37-44cf-90aa-c8cbb23f7508-trusted-ca\") pod \"ingress-operator-5b745b69d9-4v5gj\" (UID: \"6e36bd7a-1a37-44cf-90aa-c8cbb23f7508\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4v5gj" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.976863 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.977343 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/595c45e9-e480-4930-b3b6-5075f16629a9-images\") pod \"machine-config-operator-74547568cd-cz42z\" (UID: \"595c45e9-e480-4930-b3b6-5075f16629a9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cz42z" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.977131 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/6e36bd7a-1a37-44cf-90aa-c8cbb23f7508-trusted-ca\") pod \"ingress-operator-5b745b69d9-4v5gj\" (UID: \"6e36bd7a-1a37-44cf-90aa-c8cbb23f7508\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4v5gj" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.977204 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4-default-certificate\") pod \"router-default-5444994796-hc9j8\" (UID: \"bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4\") " pod="openshift-ingress/router-default-5444994796-hc9j8" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.977494 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f65dff0-7fe0-47ec-a0e4-36f6abcffc27-serving-cert\") pod \"etcd-operator-b45778765-pjq6b\" (UID: \"8f65dff0-7fe0-47ec-a0e4-36f6abcffc27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pjq6b" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.977542 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8f65dff0-7fe0-47ec-a0e4-36f6abcffc27-etcd-client\") pod \"etcd-operator-b45778765-pjq6b\" (UID: \"8f65dff0-7fe0-47ec-a0e4-36f6abcffc27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pjq6b" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.977952 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqs7m\" (UniqueName: \"kubernetes.io/projected/daaf1919-f9da-4151-8932-4c77a478b531-kube-api-access-qqs7m\") pod \"marketplace-operator-79b997595-hwsk5\" (UID: \"daaf1919-f9da-4151-8932-4c77a478b531\") " pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.977987 4847 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4-service-ca-bundle\") pod \"router-default-5444994796-hc9j8\" (UID: \"bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4\") " pod="openshift-ingress/router-default-5444994796-hc9j8" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.978025 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/80ed5db4-b6af-43e0-8d98-8f544e9b6d5e-metrics-tls\") pod \"dns-operator-744455d44c-5xpzg\" (UID: \"80ed5db4-b6af-43e0-8d98-8f544e9b6d5e\") " pod="openshift-dns-operator/dns-operator-744455d44c-5xpzg" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.978045 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4-metrics-certs\") pod \"router-default-5444994796-hc9j8\" (UID: \"bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4\") " pod="openshift-ingress/router-default-5444994796-hc9j8" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.978720 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/daaf1919-f9da-4151-8932-4c77a478b531-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hwsk5\" (UID: \"daaf1919-f9da-4151-8932-4c77a478b531\") " pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.978759 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/595c45e9-e480-4930-b3b6-5075f16629a9-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cz42z\" (UID: \"595c45e9-e480-4930-b3b6-5075f16629a9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cz42z" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 
00:27:47.978784 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8f65dff0-7fe0-47ec-a0e4-36f6abcffc27-etcd-service-ca\") pod \"etcd-operator-b45778765-pjq6b\" (UID: \"8f65dff0-7fe0-47ec-a0e4-36f6abcffc27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pjq6b" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.978826 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4-stats-auth\") pod \"router-default-5444994796-hc9j8\" (UID: \"bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4\") " pod="openshift-ingress/router-default-5444994796-hc9j8" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.978850 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b5d0643c-a44f-4323-87a4-f70dc16a4a6b-apiservice-cert\") pod \"packageserver-d55dfcdfc-t4r74\" (UID: \"b5d0643c-a44f-4323-87a4-f70dc16a4a6b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.978784 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4-service-ca-bundle\") pod \"router-default-5444994796-hc9j8\" (UID: \"bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4\") " pod="openshift-ingress/router-default-5444994796-hc9j8" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.978876 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ca77c22-f027-41f1-a8dd-f40048047f45-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lg2zf\" (UID: \"7ca77c22-f027-41f1-a8dd-f40048047f45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lg2zf" 
Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.978903 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpmhp\" (UniqueName: \"kubernetes.io/projected/8f65dff0-7fe0-47ec-a0e4-36f6abcffc27-kube-api-access-hpmhp\") pod \"etcd-operator-b45778765-pjq6b\" (UID: \"8f65dff0-7fe0-47ec-a0e4-36f6abcffc27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pjq6b" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.978940 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b5d0643c-a44f-4323-87a4-f70dc16a4a6b-tmpfs\") pod \"packageserver-d55dfcdfc-t4r74\" (UID: \"b5d0643c-a44f-4323-87a4-f70dc16a4a6b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.978960 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bwm5\" (UniqueName: \"kubernetes.io/projected/bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4-kube-api-access-4bwm5\") pod \"router-default-5444994796-hc9j8\" (UID: \"bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4\") " pod="openshift-ingress/router-default-5444994796-hc9j8" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.979269 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/595c45e9-e480-4930-b3b6-5075f16629a9-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cz42z\" (UID: \"595c45e9-e480-4930-b3b6-5075f16629a9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cz42z" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.979425 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6e36bd7a-1a37-44cf-90aa-c8cbb23f7508-metrics-tls\") pod \"ingress-operator-5b745b69d9-4v5gj\" (UID: 
\"6e36bd7a-1a37-44cf-90aa-c8cbb23f7508\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4v5gj" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.979547 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b5d0643c-a44f-4323-87a4-f70dc16a4a6b-tmpfs\") pod \"packageserver-d55dfcdfc-t4r74\" (UID: \"b5d0643c-a44f-4323-87a4-f70dc16a4a6b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.979582 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/595c45e9-e480-4930-b3b6-5075f16629a9-proxy-tls\") pod \"machine-config-operator-74547568cd-cz42z\" (UID: \"595c45e9-e480-4930-b3b6-5075f16629a9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cz42z" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.981405 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4-default-certificate\") pod \"router-default-5444994796-hc9j8\" (UID: \"bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4\") " pod="openshift-ingress/router-default-5444994796-hc9j8" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.981572 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f65dff0-7fe0-47ec-a0e4-36f6abcffc27-serving-cert\") pod \"etcd-operator-b45778765-pjq6b\" (UID: \"8f65dff0-7fe0-47ec-a0e4-36f6abcffc27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pjq6b" Feb 18 00:27:47 crc kubenswrapper[4847]: I0218 00:27:47.997644 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.002928 4847 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4-stats-auth\") pod \"router-default-5444994796-hc9j8\" (UID: \"bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4\") " pod="openshift-ingress/router-default-5444994796-hc9j8" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.018647 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.031300 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4-metrics-certs\") pod \"router-default-5444994796-hc9j8\" (UID: \"bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4\") " pod="openshift-ingress/router-default-5444994796-hc9j8" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.037528 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.056966 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.066748 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f65dff0-7fe0-47ec-a0e4-36f6abcffc27-config\") pod \"etcd-operator-b45778765-pjq6b\" (UID: \"8f65dff0-7fe0-47ec-a0e4-36f6abcffc27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pjq6b" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.077736 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.082352 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8f65dff0-7fe0-47ec-a0e4-36f6abcffc27-etcd-client\") pod 
\"etcd-operator-b45778765-pjq6b\" (UID: \"8f65dff0-7fe0-47ec-a0e4-36f6abcffc27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pjq6b" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.098265 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.124167 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.137974 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.139841 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8f65dff0-7fe0-47ec-a0e4-36f6abcffc27-etcd-service-ca\") pod \"etcd-operator-b45778765-pjq6b\" (UID: \"8f65dff0-7fe0-47ec-a0e4-36f6abcffc27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pjq6b" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.157619 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.177699 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.186319 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8f65dff0-7fe0-47ec-a0e4-36f6abcffc27-etcd-ca\") pod \"etcd-operator-b45778765-pjq6b\" (UID: \"8f65dff0-7fe0-47ec-a0e4-36f6abcffc27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pjq6b" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.197898 4847 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.218491 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.265960 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.266736 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.277883 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.297810 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.310194 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b902c054-bc7f-41e7-bcb3-bba9f5dc921d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jzfq\" (UID: \"b902c054-bc7f-41e7-bcb3-bba9f5dc921d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jzfq" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.317480 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.338407 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.347388 4847 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b902c054-bc7f-41e7-bcb3-bba9f5dc921d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jzfq\" (UID: \"b902c054-bc7f-41e7-bcb3-bba9f5dc921d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jzfq" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.357446 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.378571 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.391359 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/80ed5db4-b6af-43e0-8d98-8f544e9b6d5e-metrics-tls\") pod \"dns-operator-744455d44c-5xpzg\" (UID: \"80ed5db4-b6af-43e0-8d98-8f544e9b6d5e\") " pod="openshift-dns-operator/dns-operator-744455d44c-5xpzg" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.397117 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.417203 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.437113 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.457406 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 
00:27:48.477918 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.497811 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.517520 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.538190 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.558166 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.567728 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca77c22-f027-41f1-a8dd-f40048047f45-config\") pod \"kube-apiserver-operator-766d6c64bb-lg2zf\" (UID: \"7ca77c22-f027-41f1-a8dd-f40048047f45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lg2zf" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.577454 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.584061 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ca77c22-f027-41f1-a8dd-f40048047f45-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lg2zf\" (UID: \"7ca77c22-f027-41f1-a8dd-f40048047f45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lg2zf" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.597947 4847 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.618374 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.638132 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.658966 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.677911 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.697723 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.717746 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.735659 4847 request.go:700] Waited for 1.008049199s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&limit=500&resourceVersion=0 Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.738642 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.754521 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/daaf1919-f9da-4151-8932-4c77a478b531-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hwsk5\" (UID: \"daaf1919-f9da-4151-8932-4c77a478b531\") " pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.758820 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.789110 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.791796 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/daaf1919-f9da-4151-8932-4c77a478b531-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hwsk5\" (UID: \"daaf1919-f9da-4151-8932-4c77a478b531\") " pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.798663 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.818243 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.838555 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.858559 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.872863 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/4033b09d-aa99-4b2d-b12f-c5e6f58530f0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n6t5r\" (UID: \"4033b09d-aa99-4b2d-b12f-c5e6f58530f0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n6t5r" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.878109 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.898819 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.918772 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.924657 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b5d0643c-a44f-4323-87a4-f70dc16a4a6b-apiservice-cert\") pod \"packageserver-d55dfcdfc-t4r74\" (UID: \"b5d0643c-a44f-4323-87a4-f70dc16a4a6b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.932776 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b5d0643c-a44f-4323-87a4-f70dc16a4a6b-webhook-cert\") pod \"packageserver-d55dfcdfc-t4r74\" (UID: \"b5d0643c-a44f-4323-87a4-f70dc16a4a6b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.958324 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.978569 4847 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 18 00:27:48 crc kubenswrapper[4847]: I0218 00:27:48.998915 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.019527 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.038331 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.057460 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.078027 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.097910 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.119401 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.139828 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.157971 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.178376 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 00:27:49 
crc kubenswrapper[4847]: I0218 00:27:49.197886 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.218487 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.238808 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.257922 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.269526 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.278313 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.297678 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.316797 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.337663 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.357166 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.378264 4847 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.397365 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.419230 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.438857 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.457514 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.478438 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.498384 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.518130 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.538537 4847 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.558358 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.579371 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.598432 4847 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.618059 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.639253 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.685227 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvwkk\" (UniqueName: \"kubernetes.io/projected/7f02ce80-0362-4208-bfcf-3f68956dd6f2-kube-api-access-gvwkk\") pod \"openshift-config-operator-7777fb866f-hc5ks\" (UID: \"7f02ce80-0362-4208-bfcf-3f68956dd6f2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hc5ks" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.703993 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1796e7d1-9237-4700-ba09-c5f1bd74e457-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xb6hm\" (UID: \"1796e7d1-9237-4700-ba09-c5f1bd74e457\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xb6hm" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.719368 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwcz2\" (UniqueName: \"kubernetes.io/projected/464f104b-7665-4b2c-a507-81b166174685-kube-api-access-hwcz2\") pod \"controller-manager-879f6c89f-tmbbz\" (UID: \"464f104b-7665-4b2c-a507-81b166174685\") " pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.729977 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbzwr\" (UniqueName: 
\"kubernetes.io/projected/78d08277-0a0a-4e0a-ab40-803bfdd76e29-kube-api-access-wbzwr\") pod \"route-controller-manager-6576b87f9c-c7dv2\" (UID: \"78d08277-0a0a-4e0a-ab40-803bfdd76e29\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.735720 4847 request.go:700] Waited for 1.865474741s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/serviceaccounts/authentication-operator/token Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.752005 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7jqp\" (UniqueName: \"kubernetes.io/projected/97682d07-0505-453d-afc6-2d9c8dfc4638-kube-api-access-q7jqp\") pod \"authentication-operator-69f744f599-ds8pk\" (UID: \"97682d07-0505-453d-afc6-2d9c8dfc4638\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ds8pk" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.780908 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8952z\" (UniqueName: \"kubernetes.io/projected/17d9fff8-b1cd-4124-8dc8-607c81e15c21-kube-api-access-8952z\") pod \"image-pruner-29522880-98mw7\" (UID: \"17d9fff8-b1cd-4124-8dc8-607c81e15c21\") " pod="openshift-image-registry/image-pruner-29522880-98mw7" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.791640 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spt2d\" (UniqueName: \"kubernetes.io/projected/cbf93e33-d1c5-4eff-987d-7563a4bd5e45-kube-api-access-spt2d\") pod \"apiserver-76f77b778f-d4c9w\" (UID: \"cbf93e33-d1c5-4eff-987d-7563a4bd5e45\") " pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.810984 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tctqj\" 
(UniqueName: \"kubernetes.io/projected/d19bd61b-ca84-4eb1-aacb-28ef75d7446a-kube-api-access-tctqj\") pod \"machine-approver-56656f9798-26w6l\" (UID: \"d19bd61b-ca84-4eb1-aacb-28ef75d7446a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26w6l" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.831390 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvn76\" (UniqueName: \"kubernetes.io/projected/40549f47-53f3-4990-a2b0-921413ba5862-kube-api-access-zvn76\") pod \"downloads-7954f5f757-sc5ff\" (UID: \"40549f47-53f3-4990-a2b0-921413ba5862\") " pod="openshift-console/downloads-7954f5f757-sc5ff" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.843909 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.850128 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.860293 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.861879 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6hwx\" (UniqueName: \"kubernetes.io/projected/b4d13f62-c469-4050-8974-8ccf32bf0bce-kube-api-access-s6hwx\") pod \"console-f9d7485db-tfxw5\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.872840 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5qb7\" (UniqueName: \"kubernetes.io/projected/d29f70a5-3d87-465a-a052-922f9616ac9d-kube-api-access-h5qb7\") pod \"openshift-apiserver-operator-796bbdcf4f-nftpn\" (UID: \"d29f70a5-3d87-465a-a052-922f9616ac9d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nftpn" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.904541 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ds8pk" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.904142 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j7ql\" (UniqueName: \"kubernetes.io/projected/1796e7d1-9237-4700-ba09-c5f1bd74e457-kube-api-access-5j7ql\") pod \"cluster-image-registry-operator-dc59b4c8b-xb6hm\" (UID: \"1796e7d1-9237-4700-ba09-c5f1bd74e457\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xb6hm" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.912533 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hc5ks" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.917546 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c5f9\" (UniqueName: \"kubernetes.io/projected/197d7a69-19e6-4c08-b68d-f21073ad7487-kube-api-access-5c5f9\") pod \"console-operator-58897d9998-8d9lm\" (UID: \"197d7a69-19e6-4c08-b68d-f21073ad7487\") " pod="openshift-console-operator/console-operator-58897d9998-8d9lm" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.922098 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-sc5ff" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.928524 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.935089 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xb6hm" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.940859 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfmhz\" (UniqueName: \"kubernetes.io/projected/44555695-834e-4ffc-bee2-b16d7adf6fbc-kube-api-access-zfmhz\") pod \"machine-api-operator-5694c8668f-pz8zw\" (UID: \"44555695-834e-4ffc-bee2-b16d7adf6fbc\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pz8zw" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.965701 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5fkl\" (UniqueName: \"kubernetes.io/projected/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-kube-api-access-z5fkl\") pod \"oauth-openshift-558db77b4-xk7s7\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:49 crc kubenswrapper[4847]: I0218 00:27:49.975131 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkbwj\" (UniqueName: \"kubernetes.io/projected/30a4cfb1-057e-4d60-a8bd-f9ee95163f73-kube-api-access-rkbwj\") pod \"apiserver-7bbb656c7d-6dmsr\" (UID: \"30a4cfb1-057e-4d60-a8bd-f9ee95163f73\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.021366 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6e36bd7a-1a37-44cf-90aa-c8cbb23f7508-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4v5gj\" (UID: \"6e36bd7a-1a37-44cf-90aa-c8cbb23f7508\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4v5gj" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.024448 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29522880-98mw7" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.046959 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26w6l" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.055207 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f78l\" (UniqueName: \"kubernetes.io/projected/80ed5db4-b6af-43e0-8d98-8f544e9b6d5e-kube-api-access-7f78l\") pod \"dns-operator-744455d44c-5xpzg\" (UID: \"80ed5db4-b6af-43e0-8d98-8f544e9b6d5e\") " pod="openshift-dns-operator/dns-operator-744455d44c-5xpzg" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.057643 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ca77c22-f027-41f1-a8dd-f40048047f45-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lg2zf\" (UID: \"7ca77c22-f027-41f1-a8dd-f40048047f45\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lg2zf" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.062052 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.072315 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lg2zf" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.072826 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42jqw\" (UniqueName: \"kubernetes.io/projected/595c45e9-e480-4930-b3b6-5075f16629a9-kube-api-access-42jqw\") pod \"machine-config-operator-74547568cd-cz42z\" (UID: \"595c45e9-e480-4930-b3b6-5075f16629a9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cz42z" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.090249 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cp45\" (UniqueName: \"kubernetes.io/projected/6e36bd7a-1a37-44cf-90aa-c8cbb23f7508-kube-api-access-2cp45\") pod \"ingress-operator-5b745b69d9-4v5gj\" (UID: \"6e36bd7a-1a37-44cf-90aa-c8cbb23f7508\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4v5gj" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.102674 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d4c9w"] Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.103002 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.116501 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxwgt\" (UniqueName: \"kubernetes.io/projected/4033b09d-aa99-4b2d-b12f-c5e6f58530f0-kube-api-access-wxwgt\") pod \"multus-admission-controller-857f4d67dd-n6t5r\" (UID: \"4033b09d-aa99-4b2d-b12f-c5e6f58530f0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n6t5r" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.123037 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nftpn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.123093 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6t5r" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.142453 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thtkz\" (UniqueName: \"kubernetes.io/projected/b5d0643c-a44f-4323-87a4-f70dc16a4a6b-kube-api-access-thtkz\") pod \"packageserver-d55dfcdfc-t4r74\" (UID: \"b5d0643c-a44f-4323-87a4-f70dc16a4a6b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.158417 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cktp\" (UniqueName: \"kubernetes.io/projected/b902c054-bc7f-41e7-bcb3-bba9f5dc921d-kube-api-access-4cktp\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jzfq\" (UID: \"b902c054-bc7f-41e7-bcb3-bba9f5dc921d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jzfq" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.176595 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8d9lm" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.176669 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqs7m\" (UniqueName: \"kubernetes.io/projected/daaf1919-f9da-4151-8932-4c77a478b531-kube-api-access-qqs7m\") pod \"marketplace-operator-79b997595-hwsk5\" (UID: \"daaf1919-f9da-4151-8932-4c77a478b531\") " pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.192402 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-pz8zw" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.198252 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpmhp\" (UniqueName: \"kubernetes.io/projected/8f65dff0-7fe0-47ec-a0e4-36f6abcffc27-kube-api-access-hpmhp\") pod \"etcd-operator-b45778765-pjq6b\" (UID: \"8f65dff0-7fe0-47ec-a0e4-36f6abcffc27\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pjq6b" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.218737 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" event={"ID":"cbf93e33-d1c5-4eff-987d-7563a4bd5e45","Type":"ContainerStarted","Data":"34ef576416e310e1b73752392f878e515bbd3e54adad0a3a60afb4af115aff57"} Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.221032 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bwm5\" (UniqueName: \"kubernetes.io/projected/bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4-kube-api-access-4bwm5\") pod \"router-default-5444994796-hc9j8\" (UID: \"bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4\") " pod="openshift-ingress/router-default-5444994796-hc9j8" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.221272 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26w6l" event={"ID":"d19bd61b-ca84-4eb1-aacb-28ef75d7446a","Type":"ContainerStarted","Data":"01b69cf8ff32b4273e9f843be9819d8c4bcc1612c7b14a4b227a554369c730d6"} Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.251936 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cz42z" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.256046 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4v5gj" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.320998 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3f4c85a9-c568-472e-b05b-546a70da9391-bound-sa-token\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.321064 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0aed4c6f-08ce-4dc7-ae2a-efb45adc0844-srv-cert\") pod \"catalog-operator-68c6474976-g7cgq\" (UID: \"0aed4c6f-08ce-4dc7-ae2a-efb45adc0844\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7cgq" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.321084 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3f4c85a9-c568-472e-b05b-546a70da9391-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.321117 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60cf65de-e894-4bcd-99b6-bb7642275ed6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sgg8q\" (UID: \"60cf65de-e894-4bcd-99b6-bb7642275ed6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sgg8q" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.321134 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f65decaf-2dc6-495b-826b-b36cfa028e48-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-n6zk7\" (UID: \"f65decaf-2dc6-495b-826b-b36cfa028e48\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n6zk7" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.321194 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ee1601a-2d54-499e-bbe2-69884e9a0678-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6kdbm\" (UID: \"3ee1601a-2d54-499e-bbe2-69884e9a0678\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6kdbm" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.321212 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0aed4c6f-08ce-4dc7-ae2a-efb45adc0844-profile-collector-cert\") pod \"catalog-operator-68c6474976-g7cgq\" (UID: \"0aed4c6f-08ce-4dc7-ae2a-efb45adc0844\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7cgq" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.321228 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60cf65de-e894-4bcd-99b6-bb7642275ed6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sgg8q\" (UID: \"60cf65de-e894-4bcd-99b6-bb7642275ed6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sgg8q" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.321243 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/3ee1601a-2d54-499e-bbe2-69884e9a0678-config\") pod \"kube-controller-manager-operator-78b949d7b-6kdbm\" (UID: \"3ee1601a-2d54-499e-bbe2-69884e9a0678\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6kdbm" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.321261 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn7tm\" (UniqueName: \"kubernetes.io/projected/728ae134-78e1-466d-9d53-8709b0a894ef-kube-api-access-xn7tm\") pod \"openshift-controller-manager-operator-756b6f6bc6-p5gdf\" (UID: \"728ae134-78e1-466d-9d53-8709b0a894ef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p5gdf" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.321285 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3f4c85a9-c568-472e-b05b-546a70da9391-registry-tls\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.321314 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/728ae134-78e1-466d-9d53-8709b0a894ef-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-p5gdf\" (UID: \"728ae134-78e1-466d-9d53-8709b0a894ef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p5gdf" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.321395 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnc8f\" (UniqueName: \"kubernetes.io/projected/0aed4c6f-08ce-4dc7-ae2a-efb45adc0844-kube-api-access-qnc8f\") pod 
\"catalog-operator-68c6474976-g7cgq\" (UID: \"0aed4c6f-08ce-4dc7-ae2a-efb45adc0844\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7cgq" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.321433 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3f4c85a9-c568-472e-b05b-546a70da9391-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.321450 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60cf65de-e894-4bcd-99b6-bb7642275ed6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sgg8q\" (UID: \"60cf65de-e894-4bcd-99b6-bb7642275ed6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sgg8q" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.321470 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqjvk\" (UniqueName: \"kubernetes.io/projected/f65decaf-2dc6-495b-826b-b36cfa028e48-kube-api-access-tqjvk\") pod \"package-server-manager-789f6589d5-n6zk7\" (UID: \"f65decaf-2dc6-495b-826b-b36cfa028e48\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n6zk7" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.321497 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3f4c85a9-c568-472e-b05b-546a70da9391-registry-certificates\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc 
kubenswrapper[4847]: I0218 00:27:50.321558 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpqbt\" (UniqueName: \"kubernetes.io/projected/3f4c85a9-c568-472e-b05b-546a70da9391-kube-api-access-kpqbt\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.321578 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f4c85a9-c568-472e-b05b-546a70da9391-trusted-ca\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.321620 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/728ae134-78e1-466d-9d53-8709b0a894ef-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-p5gdf\" (UID: \"728ae134-78e1-466d-9d53-8709b0a894ef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p5gdf" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.321639 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ee1601a-2d54-499e-bbe2-69884e9a0678-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6kdbm\" (UID: \"3ee1601a-2d54-499e-bbe2-69884e9a0678\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6kdbm" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.321662 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: E0218 00:27:50.326811 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:50.826790058 +0000 UTC m=+144.204141000 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.328560 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hc9j8" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.336353 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pjq6b" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.343864 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jzfq" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.356682 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5xpzg" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.417485 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.425176 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.425388 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z69s\" (UniqueName: \"kubernetes.io/projected/f6560bd1-3171-4adb-9a64-2ce644a55abf-kube-api-access-9z69s\") pod \"csi-hostpathplugin-j66wn\" (UID: \"f6560bd1-3171-4adb-9a64-2ce644a55abf\") " pod="hostpath-provisioner/csi-hostpathplugin-j66wn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.425422 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqwxl\" (UniqueName: \"kubernetes.io/projected/37c05b59-e4e9-41f5-a36b-73c66027b1cc-kube-api-access-jqwxl\") pod \"machine-config-controller-84d6567774-rj7hf\" (UID: \"37c05b59-e4e9-41f5-a36b-73c66027b1cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rj7hf" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.425476 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/728ae134-78e1-466d-9d53-8709b0a894ef-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-p5gdf\" (UID: \"728ae134-78e1-466d-9d53-8709b0a894ef\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p5gdf" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.425507 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f6560bd1-3171-4adb-9a64-2ce644a55abf-socket-dir\") pod \"csi-hostpathplugin-j66wn\" (UID: \"f6560bd1-3171-4adb-9a64-2ce644a55abf\") " pod="hostpath-provisioner/csi-hostpathplugin-j66wn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.425626 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnc8f\" (UniqueName: \"kubernetes.io/projected/0aed4c6f-08ce-4dc7-ae2a-efb45adc0844-kube-api-access-qnc8f\") pod \"catalog-operator-68c6474976-g7cgq\" (UID: \"0aed4c6f-08ce-4dc7-ae2a-efb45adc0844\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7cgq" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.425664 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f6560bd1-3171-4adb-9a64-2ce644a55abf-registration-dir\") pod \"csi-hostpathplugin-j66wn\" (UID: \"f6560bd1-3171-4adb-9a64-2ce644a55abf\") " pod="hostpath-provisioner/csi-hostpathplugin-j66wn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.425686 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37c05b59-e4e9-41f5-a36b-73c66027b1cc-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rj7hf\" (UID: \"37c05b59-e4e9-41f5-a36b-73c66027b1cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rj7hf" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.425714 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/8065a5b7-ace7-4dfd-baff-f4d40fe197ab-srv-cert\") pod \"olm-operator-6b444d44fb-qvh8m\" (UID: \"8065a5b7-ace7-4dfd-baff-f4d40fe197ab\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvh8m" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.425740 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bf8x\" (UniqueName: \"kubernetes.io/projected/c6462fad-745e-4228-acdd-d0f00c2f066d-kube-api-access-2bf8x\") pod \"service-ca-operator-777779d784-hwnbk\" (UID: \"c6462fad-745e-4228-acdd-d0f00c2f066d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwnbk" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.425811 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6462fad-745e-4228-acdd-d0f00c2f066d-config\") pod \"service-ca-operator-777779d784-hwnbk\" (UID: \"c6462fad-745e-4228-acdd-d0f00c2f066d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwnbk" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.425862 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3f4c85a9-c568-472e-b05b-546a70da9391-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.425885 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37c05b59-e4e9-41f5-a36b-73c66027b1cc-proxy-tls\") pod \"machine-config-controller-84d6567774-rj7hf\" (UID: \"37c05b59-e4e9-41f5-a36b-73c66027b1cc\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rj7hf" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.425923 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60cf65de-e894-4bcd-99b6-bb7642275ed6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sgg8q\" (UID: \"60cf65de-e894-4bcd-99b6-bb7642275ed6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sgg8q" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.425944 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f6560bd1-3171-4adb-9a64-2ce644a55abf-plugins-dir\") pod \"csi-hostpathplugin-j66wn\" (UID: \"f6560bd1-3171-4adb-9a64-2ce644a55abf\") " pod="hostpath-provisioner/csi-hostpathplugin-j66wn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.425967 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f25e4583-f904-4b03-bcd3-1aca08f953f7-node-bootstrap-token\") pod \"machine-config-server-4bt25\" (UID: \"f25e4583-f904-4b03-bcd3-1aca08f953f7\") " pod="openshift-machine-config-operator/machine-config-server-4bt25" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.426048 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqjvk\" (UniqueName: \"kubernetes.io/projected/f65decaf-2dc6-495b-826b-b36cfa028e48-kube-api-access-tqjvk\") pod \"package-server-manager-789f6589d5-n6zk7\" (UID: \"f65decaf-2dc6-495b-826b-b36cfa028e48\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n6zk7" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.426079 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-z7mxs\" (UniqueName: \"kubernetes.io/projected/29ecd22e-1180-4df8-98bc-d36c04c8faf3-kube-api-access-z7mxs\") pod \"dns-default-n67v9\" (UID: \"29ecd22e-1180-4df8-98bc-d36c04c8faf3\") " pod="openshift-dns/dns-default-n67v9" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.426106 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f25e4583-f904-4b03-bcd3-1aca08f953f7-certs\") pod \"machine-config-server-4bt25\" (UID: \"f25e4583-f904-4b03-bcd3-1aca08f953f7\") " pod="openshift-machine-config-operator/machine-config-server-4bt25" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.426181 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3f4c85a9-c568-472e-b05b-546a70da9391-registry-certificates\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.426275 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1d3ce5d-31e2-4602-9e02-076ee07ace01-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6hrpj\" (UID: \"d1d3ce5d-31e2-4602-9e02-076ee07ace01\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6hrpj" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.426419 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpqbt\" (UniqueName: \"kubernetes.io/projected/3f4c85a9-c568-472e-b05b-546a70da9391-kube-api-access-kpqbt\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc 
kubenswrapper[4847]: I0218 00:27:50.426453 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f6560bd1-3171-4adb-9a64-2ce644a55abf-csi-data-dir\") pod \"csi-hostpathplugin-j66wn\" (UID: \"f6560bd1-3171-4adb-9a64-2ce644a55abf\") " pod="hostpath-provisioner/csi-hostpathplugin-j66wn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.426506 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f4c85a9-c568-472e-b05b-546a70da9391-trusted-ca\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.426526 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxh4b\" (UniqueName: \"kubernetes.io/projected/d1d3ce5d-31e2-4602-9e02-076ee07ace01-kube-api-access-vxh4b\") pod \"cluster-samples-operator-665b6dd947-6hrpj\" (UID: \"d1d3ce5d-31e2-4602-9e02-076ee07ace01\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6hrpj" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.426568 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/728ae134-78e1-466d-9d53-8709b0a894ef-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-p5gdf\" (UID: \"728ae134-78e1-466d-9d53-8709b0a894ef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p5gdf" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.426620 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/84ddec40-cc3b-4c50-92eb-d025f1f476d5-signing-cabundle\") pod \"service-ca-9c57cc56f-mbftk\" (UID: \"84ddec40-cc3b-4c50-92eb-d025f1f476d5\") " pod="openshift-service-ca/service-ca-9c57cc56f-mbftk" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.426643 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f6560bd1-3171-4adb-9a64-2ce644a55abf-mountpoint-dir\") pod \"csi-hostpathplugin-j66wn\" (UID: \"f6560bd1-3171-4adb-9a64-2ce644a55abf\") " pod="hostpath-provisioner/csi-hostpathplugin-j66wn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.426706 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ee1601a-2d54-499e-bbe2-69884e9a0678-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6kdbm\" (UID: \"3ee1601a-2d54-499e-bbe2-69884e9a0678\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6kdbm" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.426736 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8065a5b7-ace7-4dfd-baff-f4d40fe197ab-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qvh8m\" (UID: \"8065a5b7-ace7-4dfd-baff-f4d40fe197ab\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvh8m" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.426789 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/84ddec40-cc3b-4c50-92eb-d025f1f476d5-signing-key\") pod \"service-ca-9c57cc56f-mbftk\" (UID: \"84ddec40-cc3b-4c50-92eb-d025f1f476d5\") " pod="openshift-service-ca/service-ca-9c57cc56f-mbftk" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 
00:27:50.426812 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zf2j\" (UniqueName: \"kubernetes.io/projected/9de1b399-56b7-430a-b012-55f7ec14d3ed-kube-api-access-8zf2j\") pod \"migrator-59844c95c7-lw7v7\" (UID: \"9de1b399-56b7-430a-b012-55f7ec14d3ed\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lw7v7" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.426834 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf79x\" (UniqueName: \"kubernetes.io/projected/2616897a-54d6-46f8-bc52-a4cf07afe350-kube-api-access-rf79x\") pod \"ingress-canary-4hc85\" (UID: \"2616897a-54d6-46f8-bc52-a4cf07afe350\") " pod="openshift-ingress-canary/ingress-canary-4hc85" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.426922 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b73a0e0b-a65a-4985-b23e-40e2334a47e3-config-volume\") pod \"collect-profiles-29522895-kpcdn\" (UID: \"b73a0e0b-a65a-4985-b23e-40e2334a47e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-kpcdn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.426949 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3f4c85a9-c568-472e-b05b-546a70da9391-bound-sa-token\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.426969 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtfjh\" (UniqueName: \"kubernetes.io/projected/b73a0e0b-a65a-4985-b23e-40e2334a47e3-kube-api-access-jtfjh\") pod \"collect-profiles-29522895-kpcdn\" 
(UID: \"b73a0e0b-a65a-4985-b23e-40e2334a47e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-kpcdn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.426992 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zswws\" (UniqueName: \"kubernetes.io/projected/4e454a89-9fab-4b19-9a33-7089da87f5a0-kube-api-access-zswws\") pod \"control-plane-machine-set-operator-78cbb6b69f-qtzlv\" (UID: \"4e454a89-9fab-4b19-9a33-7089da87f5a0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtzlv" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.427014 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2616897a-54d6-46f8-bc52-a4cf07afe350-cert\") pod \"ingress-canary-4hc85\" (UID: \"2616897a-54d6-46f8-bc52-a4cf07afe350\") " pod="openshift-ingress-canary/ingress-canary-4hc85" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.427067 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0aed4c6f-08ce-4dc7-ae2a-efb45adc0844-srv-cert\") pod \"catalog-operator-68c6474976-g7cgq\" (UID: \"0aed4c6f-08ce-4dc7-ae2a-efb45adc0844\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7cgq" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.427132 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6462fad-745e-4228-acdd-d0f00c2f066d-serving-cert\") pod \"service-ca-operator-777779d784-hwnbk\" (UID: \"c6462fad-745e-4228-acdd-d0f00c2f066d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwnbk" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.427180 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3f4c85a9-c568-472e-b05b-546a70da9391-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.427310 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60cf65de-e894-4bcd-99b6-bb7642275ed6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sgg8q\" (UID: \"60cf65de-e894-4bcd-99b6-bb7642275ed6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sgg8q" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.427524 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/29ecd22e-1180-4df8-98bc-d36c04c8faf3-metrics-tls\") pod \"dns-default-n67v9\" (UID: \"29ecd22e-1180-4df8-98bc-d36c04c8faf3\") " pod="openshift-dns/dns-default-n67v9" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.427612 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f65decaf-2dc6-495b-826b-b36cfa028e48-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-n6zk7\" (UID: \"f65decaf-2dc6-495b-826b-b36cfa028e48\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n6zk7" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.427691 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smpp9\" (UniqueName: \"kubernetes.io/projected/8065a5b7-ace7-4dfd-baff-f4d40fe197ab-kube-api-access-smpp9\") pod \"olm-operator-6b444d44fb-qvh8m\" (UID: \"8065a5b7-ace7-4dfd-baff-f4d40fe197ab\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvh8m" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.427781 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg5dn\" (UniqueName: \"kubernetes.io/projected/84ddec40-cc3b-4c50-92eb-d025f1f476d5-kube-api-access-jg5dn\") pod \"service-ca-9c57cc56f-mbftk\" (UID: \"84ddec40-cc3b-4c50-92eb-d025f1f476d5\") " pod="openshift-service-ca/service-ca-9c57cc56f-mbftk" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.427802 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wc4g\" (UniqueName: \"kubernetes.io/projected/f25e4583-f904-4b03-bcd3-1aca08f953f7-kube-api-access-4wc4g\") pod \"machine-config-server-4bt25\" (UID: \"f25e4583-f904-4b03-bcd3-1aca08f953f7\") " pod="openshift-machine-config-operator/machine-config-server-4bt25" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.427843 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4e454a89-9fab-4b19-9a33-7089da87f5a0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qtzlv\" (UID: \"4e454a89-9fab-4b19-9a33-7089da87f5a0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtzlv" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.427893 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ee1601a-2d54-499e-bbe2-69884e9a0678-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6kdbm\" (UID: \"3ee1601a-2d54-499e-bbe2-69884e9a0678\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6kdbm" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.427999 4847 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0aed4c6f-08ce-4dc7-ae2a-efb45adc0844-profile-collector-cert\") pod \"catalog-operator-68c6474976-g7cgq\" (UID: \"0aed4c6f-08ce-4dc7-ae2a-efb45adc0844\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7cgq" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.428036 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b73a0e0b-a65a-4985-b23e-40e2334a47e3-secret-volume\") pod \"collect-profiles-29522895-kpcdn\" (UID: \"b73a0e0b-a65a-4985-b23e-40e2334a47e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-kpcdn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.428063 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60cf65de-e894-4bcd-99b6-bb7642275ed6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sgg8q\" (UID: \"60cf65de-e894-4bcd-99b6-bb7642275ed6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sgg8q" Feb 18 00:27:50 crc kubenswrapper[4847]: E0218 00:27:50.428134 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:50.928099528 +0000 UTC m=+144.305450630 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.428181 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29ecd22e-1180-4df8-98bc-d36c04c8faf3-config-volume\") pod \"dns-default-n67v9\" (UID: \"29ecd22e-1180-4df8-98bc-d36c04c8faf3\") " pod="openshift-dns/dns-default-n67v9" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.428218 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ee1601a-2d54-499e-bbe2-69884e9a0678-config\") pod \"kube-controller-manager-operator-78b949d7b-6kdbm\" (UID: \"3ee1601a-2d54-499e-bbe2-69884e9a0678\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6kdbm" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.428249 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn7tm\" (UniqueName: \"kubernetes.io/projected/728ae134-78e1-466d-9d53-8709b0a894ef-kube-api-access-xn7tm\") pod \"openshift-controller-manager-operator-756b6f6bc6-p5gdf\" (UID: \"728ae134-78e1-466d-9d53-8709b0a894ef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p5gdf" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.428324 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/3f4c85a9-c568-472e-b05b-546a70da9391-registry-tls\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.428498 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3f4c85a9-c568-472e-b05b-546a70da9391-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.428626 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/728ae134-78e1-466d-9d53-8709b0a894ef-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-p5gdf\" (UID: \"728ae134-78e1-466d-9d53-8709b0a894ef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p5gdf" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.429365 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60cf65de-e894-4bcd-99b6-bb7642275ed6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sgg8q\" (UID: \"60cf65de-e894-4bcd-99b6-bb7642275ed6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sgg8q" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.430079 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3f4c85a9-c568-472e-b05b-546a70da9391-registry-certificates\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.437992 
4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f65decaf-2dc6-495b-826b-b36cfa028e48-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-n6zk7\" (UID: \"f65decaf-2dc6-495b-826b-b36cfa028e48\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n6zk7" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.438283 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60cf65de-e894-4bcd-99b6-bb7642275ed6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sgg8q\" (UID: \"60cf65de-e894-4bcd-99b6-bb7642275ed6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sgg8q" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.439040 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.439982 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ee1601a-2d54-499e-bbe2-69884e9a0678-config\") pod \"kube-controller-manager-operator-78b949d7b-6kdbm\" (UID: \"3ee1601a-2d54-499e-bbe2-69884e9a0678\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6kdbm" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.440335 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f4c85a9-c568-472e-b05b-546a70da9391-trusted-ca\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.440787 4847 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3f4c85a9-c568-472e-b05b-546a70da9391-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.441055 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3f4c85a9-c568-472e-b05b-546a70da9391-registry-tls\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.445414 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0aed4c6f-08ce-4dc7-ae2a-efb45adc0844-profile-collector-cert\") pod \"catalog-operator-68c6474976-g7cgq\" (UID: \"0aed4c6f-08ce-4dc7-ae2a-efb45adc0844\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7cgq" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.448677 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0aed4c6f-08ce-4dc7-ae2a-efb45adc0844-srv-cert\") pod \"catalog-operator-68c6474976-g7cgq\" (UID: \"0aed4c6f-08ce-4dc7-ae2a-efb45adc0844\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7cgq" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.450443 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ee1601a-2d54-499e-bbe2-69884e9a0678-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6kdbm\" (UID: \"3ee1601a-2d54-499e-bbe2-69884e9a0678\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6kdbm" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.452342 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/728ae134-78e1-466d-9d53-8709b0a894ef-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-p5gdf\" (UID: \"728ae134-78e1-466d-9d53-8709b0a894ef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p5gdf" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.457635 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tmbbz"] Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.457682 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2"] Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.471485 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqjvk\" (UniqueName: \"kubernetes.io/projected/f65decaf-2dc6-495b-826b-b36cfa028e48-kube-api-access-tqjvk\") pod \"package-server-manager-789f6589d5-n6zk7\" (UID: \"f65decaf-2dc6-495b-826b-b36cfa028e48\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n6zk7" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.479723 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60cf65de-e894-4bcd-99b6-bb7642275ed6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-sgg8q\" (UID: \"60cf65de-e894-4bcd-99b6-bb7642275ed6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sgg8q" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.501568 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/3f4c85a9-c568-472e-b05b-546a70da9391-bound-sa-token\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531007 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f6560bd1-3171-4adb-9a64-2ce644a55abf-plugins-dir\") pod \"csi-hostpathplugin-j66wn\" (UID: \"f6560bd1-3171-4adb-9a64-2ce644a55abf\") " pod="hostpath-provisioner/csi-hostpathplugin-j66wn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531048 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f25e4583-f904-4b03-bcd3-1aca08f953f7-node-bootstrap-token\") pod \"machine-config-server-4bt25\" (UID: \"f25e4583-f904-4b03-bcd3-1aca08f953f7\") " pod="openshift-machine-config-operator/machine-config-server-4bt25" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531073 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7mxs\" (UniqueName: \"kubernetes.io/projected/29ecd22e-1180-4df8-98bc-d36c04c8faf3-kube-api-access-z7mxs\") pod \"dns-default-n67v9\" (UID: \"29ecd22e-1180-4df8-98bc-d36c04c8faf3\") " pod="openshift-dns/dns-default-n67v9" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531094 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f25e4583-f904-4b03-bcd3-1aca08f953f7-certs\") pod \"machine-config-server-4bt25\" (UID: \"f25e4583-f904-4b03-bcd3-1aca08f953f7\") " pod="openshift-machine-config-operator/machine-config-server-4bt25" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531122 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/d1d3ce5d-31e2-4602-9e02-076ee07ace01-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6hrpj\" (UID: \"d1d3ce5d-31e2-4602-9e02-076ee07ace01\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6hrpj" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531147 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f6560bd1-3171-4adb-9a64-2ce644a55abf-csi-data-dir\") pod \"csi-hostpathplugin-j66wn\" (UID: \"f6560bd1-3171-4adb-9a64-2ce644a55abf\") " pod="hostpath-provisioner/csi-hostpathplugin-j66wn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531165 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxh4b\" (UniqueName: \"kubernetes.io/projected/d1d3ce5d-31e2-4602-9e02-076ee07ace01-kube-api-access-vxh4b\") pod \"cluster-samples-operator-665b6dd947-6hrpj\" (UID: \"d1d3ce5d-31e2-4602-9e02-076ee07ace01\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6hrpj" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531188 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/84ddec40-cc3b-4c50-92eb-d025f1f476d5-signing-cabundle\") pod \"service-ca-9c57cc56f-mbftk\" (UID: \"84ddec40-cc3b-4c50-92eb-d025f1f476d5\") " pod="openshift-service-ca/service-ca-9c57cc56f-mbftk" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531204 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f6560bd1-3171-4adb-9a64-2ce644a55abf-mountpoint-dir\") pod \"csi-hostpathplugin-j66wn\" (UID: \"f6560bd1-3171-4adb-9a64-2ce644a55abf\") " pod="hostpath-provisioner/csi-hostpathplugin-j66wn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531227 4847 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531246 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8065a5b7-ace7-4dfd-baff-f4d40fe197ab-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qvh8m\" (UID: \"8065a5b7-ace7-4dfd-baff-f4d40fe197ab\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvh8m" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531274 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/84ddec40-cc3b-4c50-92eb-d025f1f476d5-signing-key\") pod \"service-ca-9c57cc56f-mbftk\" (UID: \"84ddec40-cc3b-4c50-92eb-d025f1f476d5\") " pod="openshift-service-ca/service-ca-9c57cc56f-mbftk" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531290 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zf2j\" (UniqueName: \"kubernetes.io/projected/9de1b399-56b7-430a-b012-55f7ec14d3ed-kube-api-access-8zf2j\") pod \"migrator-59844c95c7-lw7v7\" (UID: \"9de1b399-56b7-430a-b012-55f7ec14d3ed\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lw7v7" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531309 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf79x\" (UniqueName: \"kubernetes.io/projected/2616897a-54d6-46f8-bc52-a4cf07afe350-kube-api-access-rf79x\") pod \"ingress-canary-4hc85\" (UID: \"2616897a-54d6-46f8-bc52-a4cf07afe350\") " 
pod="openshift-ingress-canary/ingress-canary-4hc85" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531341 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b73a0e0b-a65a-4985-b23e-40e2334a47e3-config-volume\") pod \"collect-profiles-29522895-kpcdn\" (UID: \"b73a0e0b-a65a-4985-b23e-40e2334a47e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-kpcdn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531359 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtfjh\" (UniqueName: \"kubernetes.io/projected/b73a0e0b-a65a-4985-b23e-40e2334a47e3-kube-api-access-jtfjh\") pod \"collect-profiles-29522895-kpcdn\" (UID: \"b73a0e0b-a65a-4985-b23e-40e2334a47e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-kpcdn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531376 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zswws\" (UniqueName: \"kubernetes.io/projected/4e454a89-9fab-4b19-9a33-7089da87f5a0-kube-api-access-zswws\") pod \"control-plane-machine-set-operator-78cbb6b69f-qtzlv\" (UID: \"4e454a89-9fab-4b19-9a33-7089da87f5a0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtzlv" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531391 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2616897a-54d6-46f8-bc52-a4cf07afe350-cert\") pod \"ingress-canary-4hc85\" (UID: \"2616897a-54d6-46f8-bc52-a4cf07afe350\") " pod="openshift-ingress-canary/ingress-canary-4hc85" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531409 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6462fad-745e-4228-acdd-d0f00c2f066d-serving-cert\") pod 
\"service-ca-operator-777779d784-hwnbk\" (UID: \"c6462fad-745e-4228-acdd-d0f00c2f066d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwnbk" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531435 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/29ecd22e-1180-4df8-98bc-d36c04c8faf3-metrics-tls\") pod \"dns-default-n67v9\" (UID: \"29ecd22e-1180-4df8-98bc-d36c04c8faf3\") " pod="openshift-dns/dns-default-n67v9" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531464 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smpp9\" (UniqueName: \"kubernetes.io/projected/8065a5b7-ace7-4dfd-baff-f4d40fe197ab-kube-api-access-smpp9\") pod \"olm-operator-6b444d44fb-qvh8m\" (UID: \"8065a5b7-ace7-4dfd-baff-f4d40fe197ab\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvh8m" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531483 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg5dn\" (UniqueName: \"kubernetes.io/projected/84ddec40-cc3b-4c50-92eb-d025f1f476d5-kube-api-access-jg5dn\") pod \"service-ca-9c57cc56f-mbftk\" (UID: \"84ddec40-cc3b-4c50-92eb-d025f1f476d5\") " pod="openshift-service-ca/service-ca-9c57cc56f-mbftk" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531499 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wc4g\" (UniqueName: \"kubernetes.io/projected/f25e4583-f904-4b03-bcd3-1aca08f953f7-kube-api-access-4wc4g\") pod \"machine-config-server-4bt25\" (UID: \"f25e4583-f904-4b03-bcd3-1aca08f953f7\") " pod="openshift-machine-config-operator/machine-config-server-4bt25" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531520 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/4e454a89-9fab-4b19-9a33-7089da87f5a0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qtzlv\" (UID: \"4e454a89-9fab-4b19-9a33-7089da87f5a0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtzlv" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531547 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b73a0e0b-a65a-4985-b23e-40e2334a47e3-secret-volume\") pod \"collect-profiles-29522895-kpcdn\" (UID: \"b73a0e0b-a65a-4985-b23e-40e2334a47e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-kpcdn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531565 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29ecd22e-1180-4df8-98bc-d36c04c8faf3-config-volume\") pod \"dns-default-n67v9\" (UID: \"29ecd22e-1180-4df8-98bc-d36c04c8faf3\") " pod="openshift-dns/dns-default-n67v9" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531618 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z69s\" (UniqueName: \"kubernetes.io/projected/f6560bd1-3171-4adb-9a64-2ce644a55abf-kube-api-access-9z69s\") pod \"csi-hostpathplugin-j66wn\" (UID: \"f6560bd1-3171-4adb-9a64-2ce644a55abf\") " pod="hostpath-provisioner/csi-hostpathplugin-j66wn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531638 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqwxl\" (UniqueName: \"kubernetes.io/projected/37c05b59-e4e9-41f5-a36b-73c66027b1cc-kube-api-access-jqwxl\") pod \"machine-config-controller-84d6567774-rj7hf\" (UID: \"37c05b59-e4e9-41f5-a36b-73c66027b1cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rj7hf" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 
00:27:50.531658 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f6560bd1-3171-4adb-9a64-2ce644a55abf-socket-dir\") pod \"csi-hostpathplugin-j66wn\" (UID: \"f6560bd1-3171-4adb-9a64-2ce644a55abf\") " pod="hostpath-provisioner/csi-hostpathplugin-j66wn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531683 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f6560bd1-3171-4adb-9a64-2ce644a55abf-registration-dir\") pod \"csi-hostpathplugin-j66wn\" (UID: \"f6560bd1-3171-4adb-9a64-2ce644a55abf\") " pod="hostpath-provisioner/csi-hostpathplugin-j66wn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531699 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37c05b59-e4e9-41f5-a36b-73c66027b1cc-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rj7hf\" (UID: \"37c05b59-e4e9-41f5-a36b-73c66027b1cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rj7hf" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531715 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8065a5b7-ace7-4dfd-baff-f4d40fe197ab-srv-cert\") pod \"olm-operator-6b444d44fb-qvh8m\" (UID: \"8065a5b7-ace7-4dfd-baff-f4d40fe197ab\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvh8m" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531732 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bf8x\" (UniqueName: \"kubernetes.io/projected/c6462fad-745e-4228-acdd-d0f00c2f066d-kube-api-access-2bf8x\") pod \"service-ca-operator-777779d784-hwnbk\" (UID: \"c6462fad-745e-4228-acdd-d0f00c2f066d\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwnbk" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531756 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6462fad-745e-4228-acdd-d0f00c2f066d-config\") pod \"service-ca-operator-777779d784-hwnbk\" (UID: \"c6462fad-745e-4228-acdd-d0f00c2f066d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwnbk" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.531775 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37c05b59-e4e9-41f5-a36b-73c66027b1cc-proxy-tls\") pod \"machine-config-controller-84d6567774-rj7hf\" (UID: \"37c05b59-e4e9-41f5-a36b-73c66027b1cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rj7hf" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.535274 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f6560bd1-3171-4adb-9a64-2ce644a55abf-mountpoint-dir\") pod \"csi-hostpathplugin-j66wn\" (UID: \"f6560bd1-3171-4adb-9a64-2ce644a55abf\") " pod="hostpath-provisioner/csi-hostpathplugin-j66wn" Feb 18 00:27:50 crc kubenswrapper[4847]: E0218 00:27:50.535700 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:51.035685561 +0000 UTC m=+144.413036503 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.536078 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/84ddec40-cc3b-4c50-92eb-d025f1f476d5-signing-cabundle\") pod \"service-ca-9c57cc56f-mbftk\" (UID: \"84ddec40-cc3b-4c50-92eb-d025f1f476d5\") " pod="openshift-service-ca/service-ca-9c57cc56f-mbftk" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.536473 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f6560bd1-3171-4adb-9a64-2ce644a55abf-csi-data-dir\") pod \"csi-hostpathplugin-j66wn\" (UID: \"f6560bd1-3171-4adb-9a64-2ce644a55abf\") " pod="hostpath-provisioner/csi-hostpathplugin-j66wn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.538881 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29ecd22e-1180-4df8-98bc-d36c04c8faf3-config-volume\") pod \"dns-default-n67v9\" (UID: \"29ecd22e-1180-4df8-98bc-d36c04c8faf3\") " pod="openshift-dns/dns-default-n67v9" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.539245 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b73a0e0b-a65a-4985-b23e-40e2334a47e3-config-volume\") pod \"collect-profiles-29522895-kpcdn\" (UID: \"b73a0e0b-a65a-4985-b23e-40e2334a47e3\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-kpcdn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.539268 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f6560bd1-3171-4adb-9a64-2ce644a55abf-plugins-dir\") pod \"csi-hostpathplugin-j66wn\" (UID: \"f6560bd1-3171-4adb-9a64-2ce644a55abf\") " pod="hostpath-provisioner/csi-hostpathplugin-j66wn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.540563 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37c05b59-e4e9-41f5-a36b-73c66027b1cc-proxy-tls\") pod \"machine-config-controller-84d6567774-rj7hf\" (UID: \"37c05b59-e4e9-41f5-a36b-73c66027b1cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rj7hf" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.540909 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f6560bd1-3171-4adb-9a64-2ce644a55abf-socket-dir\") pod \"csi-hostpathplugin-j66wn\" (UID: \"f6560bd1-3171-4adb-9a64-2ce644a55abf\") " pod="hostpath-provisioner/csi-hostpathplugin-j66wn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.541591 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37c05b59-e4e9-41f5-a36b-73c66027b1cc-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rj7hf\" (UID: \"37c05b59-e4e9-41f5-a36b-73c66027b1cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rj7hf" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.542677 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8065a5b7-ace7-4dfd-baff-f4d40fe197ab-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qvh8m\" 
(UID: \"8065a5b7-ace7-4dfd-baff-f4d40fe197ab\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvh8m" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.542783 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f6560bd1-3171-4adb-9a64-2ce644a55abf-registration-dir\") pod \"csi-hostpathplugin-j66wn\" (UID: \"f6560bd1-3171-4adb-9a64-2ce644a55abf\") " pod="hostpath-provisioner/csi-hostpathplugin-j66wn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.542904 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6462fad-745e-4228-acdd-d0f00c2f066d-config\") pod \"service-ca-operator-777779d784-hwnbk\" (UID: \"c6462fad-745e-4228-acdd-d0f00c2f066d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwnbk" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.546375 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8065a5b7-ace7-4dfd-baff-f4d40fe197ab-srv-cert\") pod \"olm-operator-6b444d44fb-qvh8m\" (UID: \"8065a5b7-ace7-4dfd-baff-f4d40fe197ab\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvh8m" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.546067 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f25e4583-f904-4b03-bcd3-1aca08f953f7-certs\") pod \"machine-config-server-4bt25\" (UID: \"f25e4583-f904-4b03-bcd3-1aca08f953f7\") " pod="openshift-machine-config-operator/machine-config-server-4bt25" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.551948 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f25e4583-f904-4b03-bcd3-1aca08f953f7-node-bootstrap-token\") pod \"machine-config-server-4bt25\" (UID: 
\"f25e4583-f904-4b03-bcd3-1aca08f953f7\") " pod="openshift-machine-config-operator/machine-config-server-4bt25" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.552445 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1d3ce5d-31e2-4602-9e02-076ee07ace01-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6hrpj\" (UID: \"d1d3ce5d-31e2-4602-9e02-076ee07ace01\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6hrpj" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.552958 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2616897a-54d6-46f8-bc52-a4cf07afe350-cert\") pod \"ingress-canary-4hc85\" (UID: \"2616897a-54d6-46f8-bc52-a4cf07afe350\") " pod="openshift-ingress-canary/ingress-canary-4hc85" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.554511 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/29ecd22e-1180-4df8-98bc-d36c04c8faf3-metrics-tls\") pod \"dns-default-n67v9\" (UID: \"29ecd22e-1180-4df8-98bc-d36c04c8faf3\") " pod="openshift-dns/dns-default-n67v9" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.560016 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpqbt\" (UniqueName: \"kubernetes.io/projected/3f4c85a9-c568-472e-b05b-546a70da9391-kube-api-access-kpqbt\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.560016 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnc8f\" (UniqueName: \"kubernetes.io/projected/0aed4c6f-08ce-4dc7-ae2a-efb45adc0844-kube-api-access-qnc8f\") pod \"catalog-operator-68c6474976-g7cgq\" (UID: 
\"0aed4c6f-08ce-4dc7-ae2a-efb45adc0844\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7cgq" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.563015 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6462fad-745e-4228-acdd-d0f00c2f066d-serving-cert\") pod \"service-ca-operator-777779d784-hwnbk\" (UID: \"c6462fad-745e-4228-acdd-d0f00c2f066d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwnbk" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.572757 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn7tm\" (UniqueName: \"kubernetes.io/projected/728ae134-78e1-466d-9d53-8709b0a894ef-kube-api-access-xn7tm\") pod \"openshift-controller-manager-operator-756b6f6bc6-p5gdf\" (UID: \"728ae134-78e1-466d-9d53-8709b0a894ef\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p5gdf" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.578283 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b73a0e0b-a65a-4985-b23e-40e2334a47e3-secret-volume\") pod \"collect-profiles-29522895-kpcdn\" (UID: \"b73a0e0b-a65a-4985-b23e-40e2334a47e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-kpcdn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.580015 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4e454a89-9fab-4b19-9a33-7089da87f5a0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qtzlv\" (UID: \"4e454a89-9fab-4b19-9a33-7089da87f5a0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtzlv" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.601170 4847 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hc5ks"] Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.623027 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ds8pk"] Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.623498 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ee1601a-2d54-499e-bbe2-69884e9a0678-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6kdbm\" (UID: \"3ee1601a-2d54-499e-bbe2-69884e9a0678\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6kdbm" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.625710 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/84ddec40-cc3b-4c50-92eb-d025f1f476d5-signing-key\") pod \"service-ca-9c57cc56f-mbftk\" (UID: \"84ddec40-cc3b-4c50-92eb-d025f1f476d5\") " pod="openshift-service-ca/service-ca-9c57cc56f-mbftk" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.625914 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p5gdf" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.633188 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:50 crc kubenswrapper[4847]: E0218 00:27:50.633502 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:51.133437995 +0000 UTC m=+144.510788937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.634822 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: E0218 00:27:50.635509 4847 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:51.135488615 +0000 UTC m=+144.512839557 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.658473 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sgg8q" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.665576 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zswws\" (UniqueName: \"kubernetes.io/projected/4e454a89-9fab-4b19-9a33-7089da87f5a0-kube-api-access-zswws\") pod \"control-plane-machine-set-operator-78cbb6b69f-qtzlv\" (UID: \"4e454a89-9fab-4b19-9a33-7089da87f5a0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtzlv" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.668834 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-sc5ff"] Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.675195 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7mxs\" (UniqueName: \"kubernetes.io/projected/29ecd22e-1180-4df8-98bc-d36c04c8faf3-kube-api-access-z7mxs\") pod \"dns-default-n67v9\" (UID: \"29ecd22e-1180-4df8-98bc-d36c04c8faf3\") " pod="openshift-dns/dns-default-n67v9" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.683367 
4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7cgq" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.708199 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf79x\" (UniqueName: \"kubernetes.io/projected/2616897a-54d6-46f8-bc52-a4cf07afe350-kube-api-access-rf79x\") pod \"ingress-canary-4hc85\" (UID: \"2616897a-54d6-46f8-bc52-a4cf07afe350\") " pod="openshift-ingress-canary/ingress-canary-4hc85" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.712259 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n6zk7" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.713958 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xb6hm"] Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.715857 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-tfxw5"] Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.723207 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtfjh\" (UniqueName: \"kubernetes.io/projected/b73a0e0b-a65a-4985-b23e-40e2334a47e3-kube-api-access-jtfjh\") pod \"collect-profiles-29522895-kpcdn\" (UID: \"b73a0e0b-a65a-4985-b23e-40e2334a47e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-kpcdn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.735493 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:50 crc kubenswrapper[4847]: 
E0218 00:27:50.735887 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:51.235851842 +0000 UTC m=+144.613202784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.736184 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: E0218 00:27:50.736858 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:51.236850396 +0000 UTC m=+144.614201338 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.743316 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xk7s7"] Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.775182 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtzlv" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.778958 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lg2zf"] Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.779184 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29522880-98mw7"] Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.784782 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-kpcdn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.798196 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n6t5r"] Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.801892 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z69s\" (UniqueName: \"kubernetes.io/projected/f6560bd1-3171-4adb-9a64-2ce644a55abf-kube-api-access-9z69s\") pod \"csi-hostpathplugin-j66wn\" (UID: \"f6560bd1-3171-4adb-9a64-2ce644a55abf\") " pod="hostpath-provisioner/csi-hostpathplugin-j66wn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.803114 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4hc85" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.804359 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zf2j\" (UniqueName: \"kubernetes.io/projected/9de1b399-56b7-430a-b012-55f7ec14d3ed-kube-api-access-8zf2j\") pod \"migrator-59844c95c7-lw7v7\" (UID: \"9de1b399-56b7-430a-b012-55f7ec14d3ed\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lw7v7" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.811206 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxh4b\" (UniqueName: \"kubernetes.io/projected/d1d3ce5d-31e2-4602-9e02-076ee07ace01-kube-api-access-vxh4b\") pod \"cluster-samples-operator-665b6dd947-6hrpj\" (UID: \"d1d3ce5d-31e2-4602-9e02-076ee07ace01\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6hrpj" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.811502 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-n67v9" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.811570 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr"] Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.812521 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqwxl\" (UniqueName: \"kubernetes.io/projected/37c05b59-e4e9-41f5-a36b-73c66027b1cc-kube-api-access-jqwxl\") pod \"machine-config-controller-84d6567774-rj7hf\" (UID: \"37c05b59-e4e9-41f5-a36b-73c66027b1cc\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rj7hf" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.818850 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bf8x\" (UniqueName: \"kubernetes.io/projected/c6462fad-745e-4228-acdd-d0f00c2f066d-kube-api-access-2bf8x\") pod \"service-ca-operator-777779d784-hwnbk\" (UID: \"c6462fad-745e-4228-acdd-d0f00c2f066d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwnbk" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.840986 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cz42z"] Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.841340 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:50 crc kubenswrapper[4847]: E0218 00:27:50.841721 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 00:27:51.341706053 +0000 UTC m=+144.719056995 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.841815 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-j66wn" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.842142 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6kdbm" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.860394 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wc4g\" (UniqueName: \"kubernetes.io/projected/f25e4583-f904-4b03-bcd3-1aca08f953f7-kube-api-access-4wc4g\") pod \"machine-config-server-4bt25\" (UID: \"f25e4583-f904-4b03-bcd3-1aca08f953f7\") " pod="openshift-machine-config-operator/machine-config-server-4bt25" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.865748 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg5dn\" (UniqueName: \"kubernetes.io/projected/84ddec40-cc3b-4c50-92eb-d025f1f476d5-kube-api-access-jg5dn\") pod \"service-ca-9c57cc56f-mbftk\" (UID: \"84ddec40-cc3b-4c50-92eb-d025f1f476d5\") " pod="openshift-service-ca/service-ca-9c57cc56f-mbftk" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.867241 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smpp9\" (UniqueName: 
\"kubernetes.io/projected/8065a5b7-ace7-4dfd-baff-f4d40fe197ab-kube-api-access-smpp9\") pod \"olm-operator-6b444d44fb-qvh8m\" (UID: \"8065a5b7-ace7-4dfd-baff-f4d40fe197ab\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvh8m" Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.904511 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4v5gj"] Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.922044 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pz8zw"] Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.939046 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nftpn"] Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.939100 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8d9lm"] Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.942488 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:50 crc kubenswrapper[4847]: E0218 00:27:50.942986 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:51.442971232 +0000 UTC m=+144.820322174 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:50 crc kubenswrapper[4847]: I0218 00:27:50.955064 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pjq6b"] Feb 18 00:27:50 crc kubenswrapper[4847]: W0218 00:27:50.955223 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30a4cfb1_057e_4d60_a8bd_f9ee95163f73.slice/crio-56fdeeb53cf2b8790578c0a05a2674277f69bde37d4ea33cf747e045e766fab0 WatchSource:0}: Error finding container 56fdeeb53cf2b8790578c0a05a2674277f69bde37d4ea33cf747e045e766fab0: Status 404 returned error can't find the container with id 56fdeeb53cf2b8790578c0a05a2674277f69bde37d4ea33cf747e045e766fab0 Feb 18 00:27:50 crc kubenswrapper[4847]: W0218 00:27:50.992293 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e36bd7a_1a37_44cf_90aa_c8cbb23f7508.slice/crio-2ba3b938b954dabb08def39e9d2c141559e5745111df7b6566fb8de58310bf30 WatchSource:0}: Error finding container 2ba3b938b954dabb08def39e9d2c141559e5745111df7b6566fb8de58310bf30: Status 404 returned error can't find the container with id 2ba3b938b954dabb08def39e9d2c141559e5745111df7b6566fb8de58310bf30 Feb 18 00:27:50 crc kubenswrapper[4847]: W0218 00:27:50.995864 4847 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod197d7a69_19e6_4c08_b68d_f21073ad7487.slice/crio-5e7b0bbe87d3f2b6403d9c619b8e9367152955a9041d4922876fa67c8bbc8a47 WatchSource:0}: Error finding container 5e7b0bbe87d3f2b6403d9c619b8e9367152955a9041d4922876fa67c8bbc8a47: Status 404 returned error can't find the container with id 5e7b0bbe87d3f2b6403d9c619b8e9367152955a9041d4922876fa67c8bbc8a47 Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.030740 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rj7hf" Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.040917 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hwsk5"] Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.043712 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:51 crc kubenswrapper[4847]: E0218 00:27:51.045831 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:51.545804319 +0000 UTC m=+144.923155261 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.046805 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvh8m" Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.051898 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:51 crc kubenswrapper[4847]: E0218 00:27:51.052403 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:51.552387129 +0000 UTC m=+144.929738071 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.058491 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lw7v7" Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.063302 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mbftk" Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.091549 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6hrpj" Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.095155 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwnbk" Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.097082 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74"] Feb 18 00:27:51 crc kubenswrapper[4847]: W0218 00:27:51.147052 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5d0643c_a44f_4323_87a4_f70dc16a4a6b.slice/crio-fcea4fcd12c795e7c2d0769ff065fab26a55b18659f0ccdb6c95c6dd7f4297f2 WatchSource:0}: Error finding container fcea4fcd12c795e7c2d0769ff065fab26a55b18659f0ccdb6c95c6dd7f4297f2: Status 404 returned error can't find the container with id fcea4fcd12c795e7c2d0769ff065fab26a55b18659f0ccdb6c95c6dd7f4297f2 Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.148673 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4bt25" Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.152814 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:51 crc kubenswrapper[4847]: E0218 00:27:51.153287 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:51.653272619 +0000 UTC m=+145.030623561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.155437 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p5gdf"] Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.166337 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jzfq"] Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.168747 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sgg8q"] Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.205838 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5xpzg"] Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.260054 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nftpn" event={"ID":"d29f70a5-3d87-465a-a052-922f9616ac9d","Type":"ContainerStarted","Data":"3aa908c61d5e4fa64f61d712900011b54e9687c5685e22e7582c16ea9b18e47e"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.267346 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: 
\"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:51 crc kubenswrapper[4847]: E0218 00:27:51.267893 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:51.767877792 +0000 UTC m=+145.145228734 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.290286 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26w6l" event={"ID":"d19bd61b-ca84-4eb1-aacb-28ef75d7446a","Type":"ContainerStarted","Data":"eb58a8927277d655633da1bc53bc127bf1d751b19809577462c562af7da63e65"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.294509 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pjq6b" event={"ID":"8f65dff0-7fe0-47ec-a0e4-36f6abcffc27","Type":"ContainerStarted","Data":"3b90303b5d523c8fed0a4e80b0c860debf5641044ebc2d3d619001b06ea3eda0"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.299709 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lg2zf" event={"ID":"7ca77c22-f027-41f1-a8dd-f40048047f45","Type":"ContainerStarted","Data":"8d330884f8cb60388842b07d0d494b20785f61804e19227f559d69c2f85e722b"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 
00:27:51.305524 4847 generic.go:334] "Generic (PLEG): container finished" podID="cbf93e33-d1c5-4eff-987d-7563a4bd5e45" containerID="198990db5afc3c26afde0d6bf40c191bd8c25267d7e6f61fdd70ca21890dd909" exitCode=0 Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.305697 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" event={"ID":"cbf93e33-d1c5-4eff-987d-7563a4bd5e45","Type":"ContainerDied","Data":"198990db5afc3c26afde0d6bf40c191bd8c25267d7e6f61fdd70ca21890dd909"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.307699 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sc5ff" event={"ID":"40549f47-53f3-4990-a2b0-921413ba5862","Type":"ContainerStarted","Data":"077e7f0302a75a90be145288845cedfdfec0483241d40f952ec65d9a80db50de"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.314923 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hc9j8" event={"ID":"bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4","Type":"ContainerStarted","Data":"0e09715ec0a41b5f527cda50a370ba96e8346438670691bdf5918c6adf274a7d"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.314974 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hc9j8" event={"ID":"bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4","Type":"ContainerStarted","Data":"cdcdaf1976b52cd0c6f21e5d4957f060ec2388e82de6c9a7d2381d467001132d"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.317846 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xb6hm" event={"ID":"1796e7d1-9237-4700-ba09-c5f1bd74e457","Type":"ContainerStarted","Data":"c4618f4622bc9c3ca38668e3ab194a7abeba6d9c6ebdae2de2a089ff8426f53a"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.320584 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-ds8pk" event={"ID":"97682d07-0505-453d-afc6-2d9c8dfc4638","Type":"ContainerStarted","Data":"92850189f11ca123cae7ba1596bbe04381270046035ce6d4fecdac1b9b9c8fe4"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.325473 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" event={"ID":"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c","Type":"ContainerStarted","Data":"720e421c5e1d3176ba34a5df13ee796366d1ca5dbc664fb3e722ce016361237d"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.330330 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-hc9j8" Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.332749 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29522880-98mw7" event={"ID":"17d9fff8-b1cd-4124-8dc8-607c81e15c21","Type":"ContainerStarted","Data":"9a305ca93470f9dcd360556a4e4b17d7fd40b9820c4506c8eec2c34445308bce"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.336091 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tfxw5" event={"ID":"b4d13f62-c469-4050-8974-8ccf32bf0bce","Type":"ContainerStarted","Data":"bae068a4e4bb7fc552f27d8d23090f2bc1a1640c3c5e533a7574fd19bbfab549"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.340969 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8d9lm" event={"ID":"197d7a69-19e6-4c08-b68d-f21073ad7487","Type":"ContainerStarted","Data":"5e7b0bbe87d3f2b6403d9c619b8e9367152955a9041d4922876fa67c8bbc8a47"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.348804 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" 
event={"ID":"daaf1919-f9da-4151-8932-4c77a478b531","Type":"ContainerStarted","Data":"8235e8bbb9dfcd59e6787a97d10436309e58c0c97b7f3fd0fe8befd5f2ca7240"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.361954 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" event={"ID":"464f104b-7665-4b2c-a507-81b166174685","Type":"ContainerStarted","Data":"ffeb563488f3ac19adb870a29390b14bae2910d94dc77b2fb06e32cd9154cf2c"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.362080 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.362097 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" event={"ID":"464f104b-7665-4b2c-a507-81b166174685","Type":"ContainerStarted","Data":"d7e4ede30a89a71c3cc222778589bf2605ca10adbfedfaeff09f9b9e5a4e9eaa"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.366035 4847 patch_prober.go:28] interesting pod/router-default-5444994796-hc9j8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:27:51 crc kubenswrapper[4847]: [-]has-synced failed: reason withheld Feb 18 00:27:51 crc kubenswrapper[4847]: [+]process-running ok Feb 18 00:27:51 crc kubenswrapper[4847]: healthz check failed Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.366081 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hc9j8" podUID="bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.367511 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-hc5ks" event={"ID":"7f02ce80-0362-4208-bfcf-3f68956dd6f2","Type":"ContainerStarted","Data":"af2bbfebc034b59ac05121de5d2e328924131d1b3f2ee73f8525adcc2f73a5a3"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.370233 4847 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-tmbbz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.370298 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" podUID="464f104b-7665-4b2c-a507-81b166174685" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.383762 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pz8zw" event={"ID":"44555695-834e-4ffc-bee2-b16d7adf6fbc","Type":"ContainerStarted","Data":"74175aa4838e59285fc30e4b35711ef2820bad508062985adc551cba25286941"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.389740 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:51 crc kubenswrapper[4847]: E0218 00:27:51.391162 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-18 00:27:51.891136725 +0000 UTC m=+145.268487667 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.400938 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" event={"ID":"30a4cfb1-057e-4d60-a8bd-f9ee95163f73","Type":"ContainerStarted","Data":"56fdeeb53cf2b8790578c0a05a2674277f69bde37d4ea33cf747e045e766fab0"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.450892 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74" event={"ID":"b5d0643c-a44f-4323-87a4-f70dc16a4a6b","Type":"ContainerStarted","Data":"fcea4fcd12c795e7c2d0769ff065fab26a55b18659f0ccdb6c95c6dd7f4297f2"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.450949 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cz42z" event={"ID":"595c45e9-e480-4930-b3b6-5075f16629a9","Type":"ContainerStarted","Data":"509d12a3da2bfc875d25c5463530ec9c4e45c281bd8018fa0d95d532c70351d0"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.455822 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4v5gj" event={"ID":"6e36bd7a-1a37-44cf-90aa-c8cbb23f7508","Type":"ContainerStarted","Data":"2ba3b938b954dabb08def39e9d2c141559e5745111df7b6566fb8de58310bf30"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.462966 4847 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6t5r" event={"ID":"4033b09d-aa99-4b2d-b12f-c5e6f58530f0","Type":"ContainerStarted","Data":"ae9da877cabecfe29d7b1e9ac86161487e000896cdffa3c1236a894a4a85cf86"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.477831 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2" event={"ID":"78d08277-0a0a-4e0a-ab40-803bfdd76e29","Type":"ContainerStarted","Data":"e692c229fa1cffc22b7c4b55c72f3527c30793a8ae89f29d604239e69a72ab2b"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.477887 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2" event={"ID":"78d08277-0a0a-4e0a-ab40-803bfdd76e29","Type":"ContainerStarted","Data":"79e09f2b731bd96d65be88520fbdd8385a86a9d76f46d91c21e6bde053a8a87a"} Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.478379 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2" Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.481992 4847 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-c7dv2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.482154 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2" podUID="78d08277-0a0a-4e0a-ab40-803bfdd76e29" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 18 00:27:51 
crc kubenswrapper[4847]: I0218 00:27:51.491231 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:51 crc kubenswrapper[4847]: E0218 00:27:51.492134 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:51.991732578 +0000 UTC m=+145.369083520 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.592788 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:51 crc kubenswrapper[4847]: E0218 00:27:51.593380 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 00:27:52.093360466 +0000 UTC m=+145.470711408 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.695450 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn"
Feb 18 00:27:51 crc kubenswrapper[4847]: E0218 00:27:51.695977 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:52.195952708 +0000 UTC m=+145.573303640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.708349 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4hc85"]
Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.736568 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-j66wn"]
Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.796367 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 00:27:51 crc kubenswrapper[4847]: E0218 00:27:51.796671 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:52.296639903 +0000 UTC m=+145.673990845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.797928 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn"
Feb 18 00:27:51 crc kubenswrapper[4847]: E0218 00:27:51.798449 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:52.298433566 +0000 UTC m=+145.675784508 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.798703 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-n67v9"]
Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.823186 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6kdbm"]
Feb 18 00:27:51 crc kubenswrapper[4847]: W0218 00:27:51.823205 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2616897a_54d6_46f8_bc52_a4cf07afe350.slice/crio-d9b43e3877ff8b79f9ffd7c2df8b69bbef27afe2c742749fd93647a7368d3bbb WatchSource:0}: Error finding container d9b43e3877ff8b79f9ffd7c2df8b69bbef27afe2c742749fd93647a7368d3bbb: Status 404 returned error can't find the container with id d9b43e3877ff8b79f9ffd7c2df8b69bbef27afe2c742749fd93647a7368d3bbb
Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.900307 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 00:27:51 crc kubenswrapper[4847]: E0218 00:27:51.901353 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:52.401329085 +0000 UTC m=+145.778680027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.935267 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mbftk"]
Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.938735 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtzlv"]
Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.946640 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522895-kpcdn"]
Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.948697 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7cgq"]
Feb 18 00:27:51 crc kubenswrapper[4847]: I0218 00:27:51.951947 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n6zk7"]
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.004908 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn"
Feb 18 00:27:52 crc kubenswrapper[4847]: E0218 00:27:52.005241 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:52.505227778 +0000 UTC m=+145.882578720 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:27:52 crc kubenswrapper[4847]: W0218 00:27:52.042573 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf25e4583_f904_4b03_bcd3_1aca08f953f7.slice/crio-0521c287fe53e950a7324eb0320162f8d91a626855fabe76e63a60d0e29fe8f1 WatchSource:0}: Error finding container 0521c287fe53e950a7324eb0320162f8d91a626855fabe76e63a60d0e29fe8f1: Status 404 returned error can't find the container with id 0521c287fe53e950a7324eb0320162f8d91a626855fabe76e63a60d0e29fe8f1
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.109650 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 00:27:52 crc kubenswrapper[4847]: E0218 00:27:52.111008 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:52.610983096 +0000 UTC m=+145.988334038 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.212673 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn"
Feb 18 00:27:52 crc kubenswrapper[4847]: E0218 00:27:52.213067 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:52.713052214 +0000 UTC m=+146.090403156 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:27:52 crc kubenswrapper[4847]: W0218 00:27:52.255304 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0aed4c6f_08ce_4dc7_ae2a_efb45adc0844.slice/crio-79f224c2ad9fae6ee28bfa348b52d7d088b2b647059487e065bfa8e1cc24f391 WatchSource:0}: Error finding container 79f224c2ad9fae6ee28bfa348b52d7d088b2b647059487e065bfa8e1cc24f391: Status 404 returned error can't find the container with id 79f224c2ad9fae6ee28bfa348b52d7d088b2b647059487e065bfa8e1cc24f391
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.321112 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 00:27:52 crc kubenswrapper[4847]: E0218 00:27:52.321500 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:52.821482798 +0000 UTC m=+146.198833740 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.333115 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" podStartSLOduration=125.3330976 podStartE2EDuration="2m5.3330976s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:52.332223628 +0000 UTC m=+145.709574570" watchObservedRunningTime="2026-02-18 00:27:52.3330976 +0000 UTC m=+145.710448542"
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.344308 4847 patch_prober.go:28] interesting pod/router-default-5444994796-hc9j8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 00:27:52 crc kubenswrapper[4847]: [-]has-synced failed: reason withheld
Feb 18 00:27:52 crc kubenswrapper[4847]: [+]process-running ok
Feb 18 00:27:52 crc kubenswrapper[4847]: healthz check failed
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.344360 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hc9j8" podUID="bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.391002 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6hrpj"]
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.391895 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2" podStartSLOduration=125.391881927 podStartE2EDuration="2m5.391881927s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:52.381020373 +0000 UTC m=+145.758371315" watchObservedRunningTime="2026-02-18 00:27:52.391881927 +0000 UTC m=+145.769232869"
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.395472 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-lw7v7"]
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.422650 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn"
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.422664 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-hc9j8" podStartSLOduration=125.422641704 podStartE2EDuration="2m5.422641704s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:52.421158778 +0000 UTC m=+145.798509710" watchObservedRunningTime="2026-02-18 00:27:52.422641704 +0000 UTC m=+145.799992646"
Feb 18 00:27:52 crc kubenswrapper[4847]: E0218 00:27:52.423120 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:52.923103825 +0000 UTC m=+146.300454767 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.527332 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 00:27:52 crc kubenswrapper[4847]: E0218 00:27:52.527726 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:53.027705836 +0000 UTC m=+146.405056778 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.539763 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-hwnbk"]
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.553051 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvh8m"]
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.571293 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6kdbm" event={"ID":"3ee1601a-2d54-499e-bbe2-69884e9a0678","Type":"ContainerStarted","Data":"ed331028ed39c0462c292fbd91d685800ee7a0621355c54c366c1f7539022832"}
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.574176 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jzfq" event={"ID":"b902c054-bc7f-41e7-bcb3-bba9f5dc921d","Type":"ContainerStarted","Data":"1f615c49e3ef0d6c55eacb06d3c1f99b1487b84d566a67fe8e6a078b5d8e8eb3"}
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.577993 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n6zk7" event={"ID":"f65decaf-2dc6-495b-826b-b36cfa028e48","Type":"ContainerStarted","Data":"64280bf8f81e3ca29931bd886de4b22dc4c12922631b8201a66af85c6c2b31e3"}
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.589732 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4v5gj" event={"ID":"6e36bd7a-1a37-44cf-90aa-c8cbb23f7508","Type":"ContainerStarted","Data":"983c097a018633485467e9443349a82d88b7d3723e2680bf40ccc39040558e1e"}
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.628231 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn"
Feb 18 00:27:52 crc kubenswrapper[4847]: E0218 00:27:52.628587 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:53.128574645 +0000 UTC m=+146.505925587 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.635813 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tfxw5" event={"ID":"b4d13f62-c469-4050-8974-8ccf32bf0bce","Type":"ContainerStarted","Data":"c04ed3fb1a5eb012e6bd85313ec04cebcb1925c7dd87b5d6aced187869d79768"}
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.637923 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29522880-98mw7" event={"ID":"17d9fff8-b1cd-4124-8dc8-607c81e15c21","Type":"ContainerStarted","Data":"7b9358a8433df226df2c3c3dc77c192ccc6699953b493d6c4c1d0f833d3a5e8b"}
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.643190 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-kpcdn" event={"ID":"b73a0e0b-a65a-4985-b23e-40e2334a47e3","Type":"ContainerStarted","Data":"30dab2b1cc01cbe5f4b7e230e51567f02c5e3ef157e9dbc6bf75d1cd2f2d13cd"}
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.676700 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26w6l" event={"ID":"d19bd61b-ca84-4eb1-aacb-28ef75d7446a","Type":"ContainerStarted","Data":"235887008d2901163b7144d1a20ae99767ce53aeea3b423738950b664a78a58c"}
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.683879 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cz42z" event={"ID":"595c45e9-e480-4930-b3b6-5075f16629a9","Type":"ContainerStarted","Data":"94084f834f97b7cc74489ddb3e4a37b828cf247b0ff48d970660f32421beaa8e"}
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.690733 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5xpzg" event={"ID":"80ed5db4-b6af-43e0-8d98-8f544e9b6d5e","Type":"ContainerStarted","Data":"6e7ce5063d45842724e327d1685e8b56d4f9f65c27b26a034b958d40bd2546a0"}
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.693882 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8d9lm" event={"ID":"197d7a69-19e6-4c08-b68d-f21073ad7487","Type":"ContainerStarted","Data":"7ff46abda9a3f8acb58db4f9320dc741c0d01c1718e5362e0e57df496eb2fe5a"}
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.706415 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-8d9lm"
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.719425 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-tfxw5" podStartSLOduration=125.719400281 podStartE2EDuration="2m5.719400281s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:52.674455239 +0000 UTC m=+146.051806181" watchObservedRunningTime="2026-02-18 00:27:52.719400281 +0000 UTC m=+146.096751233"
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.726589 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rj7hf"]
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.731779 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 00:27:52 crc kubenswrapper[4847]: E0218 00:27:52.733136 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:53.233117374 +0000 UTC m=+146.610468316 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.735934 4847 patch_prober.go:28] interesting pod/console-operator-58897d9998-8d9lm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body=
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.736532 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8d9lm" podUID="197d7a69-19e6-4c08-b68d-f21073ad7487" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused"
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.736631 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-sc5ff" event={"ID":"40549f47-53f3-4990-a2b0-921413ba5862","Type":"ContainerStarted","Data":"20f78f9db8b1f9cf222c6d31bff00c240894a255aca6a19b6520bf846fb5645e"}
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.737122 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-sc5ff"
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.751468 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29522880-98mw7" podStartSLOduration=125.751433499 podStartE2EDuration="2m5.751433499s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:52.737697335 +0000 UTC m=+146.115048277" watchObservedRunningTime="2026-02-18 00:27:52.751433499 +0000 UTC m=+146.128784441"
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.762822 4847 patch_prober.go:28] interesting pod/downloads-7954f5f757-sc5ff container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.762898 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sc5ff" podUID="40549f47-53f3-4990-a2b0-921413ba5862" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.770107 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-8d9lm" podStartSLOduration=125.770085282 podStartE2EDuration="2m5.770085282s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:52.769218031 +0000 UTC m=+146.146568963" watchObservedRunningTime="2026-02-18 00:27:52.770085282 +0000 UTC m=+146.147436224"
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.774126 4847 csr.go:261] certificate signing request csr-b9dlj is approved, waiting to be issued
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.774142 4847 csr.go:257] certificate signing request csr-b9dlj is issued
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.778918 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtzlv" event={"ID":"4e454a89-9fab-4b19-9a33-7089da87f5a0","Type":"ContainerStarted","Data":"1e3763ed874e5293b7f2425cf11a375fa450e4a48d52c17c3412fdee8989e8ee"}
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.814555 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-sc5ff" podStartSLOduration=125.814540451 podStartE2EDuration="2m5.814540451s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:52.813316722 +0000 UTC m=+146.190667664" watchObservedRunningTime="2026-02-18 00:27:52.814540451 +0000 UTC m=+146.191891393"
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.822815 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lg2zf" event={"ID":"7ca77c22-f027-41f1-a8dd-f40048047f45","Type":"ContainerStarted","Data":"b1f78e3b4e8b4207e47bf00c09e3322b0c82c1b5e65ac381063d80b90e9028d4"}
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.829454 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-4bt25" event={"ID":"f25e4583-f904-4b03-bcd3-1aca08f953f7","Type":"ContainerStarted","Data":"0521c287fe53e950a7324eb0320162f8d91a626855fabe76e63a60d0e29fe8f1"}
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.838640 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn"
Feb 18 00:27:52 crc kubenswrapper[4847]: E0218 00:27:52.842743 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:53.342730846 +0000 UTC m=+146.720081788 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.858874 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-26w6l" podStartSLOduration=125.858854218 podStartE2EDuration="2m5.858854218s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:52.857565686 +0000 UTC m=+146.234916628" watchObservedRunningTime="2026-02-18 00:27:52.858854218 +0000 UTC m=+146.236205160"
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.917576 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lg2zf" podStartSLOduration=125.917552153 podStartE2EDuration="2m5.917552153s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:52.917269516 +0000 UTC m=+146.294620458" watchObservedRunningTime="2026-02-18 00:27:52.917552153 +0000 UTC m=+146.294903095"
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.921503 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7cgq" event={"ID":"0aed4c6f-08ce-4dc7-ae2a-efb45adc0844","Type":"ContainerStarted","Data":"79f224c2ad9fae6ee28bfa348b52d7d088b2b647059487e065bfa8e1cc24f391"}
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.940389 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 00:27:52 crc kubenswrapper[4847]: E0218 00:27:52.940496 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:53.44047877 +0000 UTC m=+146.817829712 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:27:52 crc kubenswrapper[4847]: I0218 00:27:52.940663 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn"
Feb 18 00:27:52 crc kubenswrapper[4847]: E0218 00:27:52.940952 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:53.440944881 +0000 UTC m=+146.818295823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.004773 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ds8pk" event={"ID":"97682d07-0505-453d-afc6-2d9c8dfc4638","Type":"ContainerStarted","Data":"4c47a2be640df04437068cc417e9502e6c1b7024d12e6d1ebdde25a7575227fa"}
Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.013777 4847 generic.go:334] "Generic (PLEG): container finished" podID="7f02ce80-0362-4208-bfcf-3f68956dd6f2" containerID="c959f3be48085622349b60b094489276ee7610989ec8333a888039eeff36e10a" exitCode=0
Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.013853 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hc5ks" event={"ID":"7f02ce80-0362-4208-bfcf-3f68956dd6f2","Type":"ContainerDied","Data":"c959f3be48085622349b60b094489276ee7610989ec8333a888039eeff36e10a"}
Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.019169 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p5gdf" event={"ID":"728ae134-78e1-466d-9d53-8709b0a894ef","Type":"ContainerStarted","Data":"078aece0a9542ba7b8a7227edbef5ed9cdf0f560cfcbf8aebef251fc9cc79234"}
Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.039593 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4hc85" 
event={"ID":"2616897a-54d6-46f8-bc52-a4cf07afe350","Type":"ContainerStarted","Data":"d9b43e3877ff8b79f9ffd7c2df8b69bbef27afe2c742749fd93647a7368d3bbb"} Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.041414 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mbftk" event={"ID":"84ddec40-cc3b-4c50-92eb-d025f1f476d5","Type":"ContainerStarted","Data":"4dc5a1d524ac787dff286dd6ad266cf6618b60346b156787596d77220985a343"} Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.042159 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:53 crc kubenswrapper[4847]: E0218 00:27:53.042678 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:53.542659311 +0000 UTC m=+146.920010253 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.043167 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:53 crc kubenswrapper[4847]: E0218 00:27:53.052037 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:53.552015088 +0000 UTC m=+146.929366030 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.054827 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74" Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.073670 4847 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-t4r74 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body= Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.073743 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74" podUID="b5d0643c-a44f-4323-87a4-f70dc16a4a6b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.077392 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j66wn" event={"ID":"f6560bd1-3171-4adb-9a64-2ce644a55abf","Type":"ContainerStarted","Data":"665f7c68f5e7a3c7addcca5fa4743d71159c12633312119c5255ad3a2e543706"} Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.079978 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-n67v9" 
event={"ID":"29ecd22e-1180-4df8-98bc-d36c04c8faf3","Type":"ContainerStarted","Data":"b0a3a6f549680e37781dbf7b4b275d9f4aacb4e7f7ae084112b1e487b01df997"} Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.093506 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-ds8pk" podStartSLOduration=126.093480355 podStartE2EDuration="2m6.093480355s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:53.03889208 +0000 UTC m=+146.416243032" watchObservedRunningTime="2026-02-18 00:27:53.093480355 +0000 UTC m=+146.470831297" Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.121197 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p5gdf" podStartSLOduration=126.121173168 podStartE2EDuration="2m6.121173168s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:53.119691262 +0000 UTC m=+146.497042194" watchObservedRunningTime="2026-02-18 00:27:53.121173168 +0000 UTC m=+146.498524110" Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.141306 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xb6hm" event={"ID":"1796e7d1-9237-4700-ba09-c5f1bd74e457","Type":"ContainerStarted","Data":"9d2fe504b5041c6ce77654bee3854fd7ba98896b87c259945613abc8ab1b281c"} Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.143729 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:53 crc kubenswrapper[4847]: E0218 00:27:53.146371 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:53.646340369 +0000 UTC m=+147.023691311 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.150815 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pz8zw" event={"ID":"44555695-834e-4ffc-bee2-b16d7adf6fbc","Type":"ContainerStarted","Data":"db98d05eb0361d79b676aafee5ea62bb11e79d4ff07875eef6d44d490b161785"} Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.166898 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74" podStartSLOduration=126.166876148 podStartE2EDuration="2m6.166876148s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:53.163111646 +0000 UTC m=+146.540462588" watchObservedRunningTime="2026-02-18 00:27:53.166876148 +0000 UTC m=+146.544227090" Feb 18 00:27:53 crc 
kubenswrapper[4847]: I0218 00:27:53.187060 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nftpn" event={"ID":"d29f70a5-3d87-465a-a052-922f9616ac9d","Type":"ContainerStarted","Data":"8159bf8401084f9fefbfd58ed95f14a1c4dafe26300387a5c36a33f0a769a9ba"} Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.197615 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sgg8q" event={"ID":"60cf65de-e894-4bcd-99b6-bb7642275ed6","Type":"ContainerStarted","Data":"1da31fc9458c29cd8dd532bbf76aafaaa033f48a9850505c3544795b656db2b7"} Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.207902 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2" Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.212939 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.216335 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xb6hm" podStartSLOduration=126.216321519 podStartE2EDuration="2m6.216321519s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:53.213310426 +0000 UTC m=+146.590661358" watchObservedRunningTime="2026-02-18 00:27:53.216321519 +0000 UTC m=+146.593672461" Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.245710 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:53 crc kubenswrapper[4847]: E0218 00:27:53.246521 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:53.746503772 +0000 UTC m=+147.123854714 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.328957 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nftpn" podStartSLOduration=126.328906133 podStartE2EDuration="2m6.328906133s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:53.325994122 +0000 UTC m=+146.703345064" watchObservedRunningTime="2026-02-18 00:27:53.328906133 +0000 UTC m=+146.706257075" Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.346395 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:53 crc kubenswrapper[4847]: E0218 00:27:53.347170 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:53.847152796 +0000 UTC m=+147.224503738 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.349765 4847 patch_prober.go:28] interesting pod/router-default-5444994796-hc9j8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:27:53 crc kubenswrapper[4847]: [-]has-synced failed: reason withheld Feb 18 00:27:53 crc kubenswrapper[4847]: [+]process-running ok Feb 18 00:27:53 crc kubenswrapper[4847]: healthz check failed Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.349804 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hc9j8" podUID="bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.458676 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:53 crc kubenswrapper[4847]: E0218 00:27:53.459450 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:53.959437903 +0000 UTC m=+147.336788845 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.503413 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.503975 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.563843 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:53 crc kubenswrapper[4847]: E0218 00:27:53.564107 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:54.064088144 +0000 UTC m=+147.441439086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.665886 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:53 crc kubenswrapper[4847]: E0218 00:27:53.667098 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:54.167062345 +0000 UTC m=+147.544413287 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.776787 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-18 00:22:52 +0000 UTC, rotation deadline is 2026-11-18 22:02:51.65283239 +0000 UTC Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.776852 4847 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6573h34m57.875983219s for next certificate rotation Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.777542 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:53 crc kubenswrapper[4847]: E0218 00:27:53.778096 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:54.278071331 +0000 UTC m=+147.655422273 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.891710 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:53 crc kubenswrapper[4847]: E0218 00:27:53.892223 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:54.392205122 +0000 UTC m=+147.769556064 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:53 crc kubenswrapper[4847]: I0218 00:27:53.994032 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:53 crc kubenswrapper[4847]: E0218 00:27:53.998836 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:54.498803831 +0000 UTC m=+147.876154773 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.001259 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:54 crc kubenswrapper[4847]: E0218 00:27:54.001838 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:54.501823015 +0000 UTC m=+147.879173947 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.111871 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:54 crc kubenswrapper[4847]: E0218 00:27:54.112215 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:54.612183765 +0000 UTC m=+147.989534707 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.112582 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:54 crc kubenswrapper[4847]: E0218 00:27:54.112994 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:54.612979214 +0000 UTC m=+147.990330156 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.214523 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:54 crc kubenswrapper[4847]: E0218 00:27:54.215135 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:54.715111694 +0000 UTC m=+148.092462636 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.244358 4847 generic.go:334] "Generic (PLEG): container finished" podID="30a4cfb1-057e-4d60-a8bd-f9ee95163f73" containerID="2b66fda6ed2303deeace468cb0ab989e6a216666ea946b88f3b1f16be195694c" exitCode=0 Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.244465 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" event={"ID":"30a4cfb1-057e-4d60-a8bd-f9ee95163f73","Type":"ContainerDied","Data":"2b66fda6ed2303deeace468cb0ab989e6a216666ea946b88f3b1f16be195694c"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.273240 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" event={"ID":"daaf1919-f9da-4151-8932-4c77a478b531","Type":"ContainerStarted","Data":"34afd9253b44d482a3989efcbcdab02562d255f656cc1aeeb56b685568c1089a"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.274184 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.282251 4847 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hwsk5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.282728 4847 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" podUID="daaf1919-f9da-4151-8932-4c77a478b531" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.290298 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pjq6b" event={"ID":"8f65dff0-7fe0-47ec-a0e4-36f6abcffc27","Type":"ContainerStarted","Data":"9393233e4cbceaa7b0e4776fa841c17e23628a55c65a11530281bf1d5f53504c"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.303932 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" event={"ID":"cbf93e33-d1c5-4eff-987d-7563a4bd5e45","Type":"ContainerStarted","Data":"97ec0784a4985512c6a7d445492e94a714ea58316333007a67783f6b7eb80c61"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.320551 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:54 crc kubenswrapper[4847]: E0218 00:27:54.322987 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:54.822968553 +0000 UTC m=+148.200319495 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.327806 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvh8m" event={"ID":"8065a5b7-ace7-4dfd-baff-f4d40fe197ab","Type":"ContainerStarted","Data":"7d304a47ed20fd4bf4ed8dec9ed187099db667d967bf1e6afedc49256e33ac4c"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.339873 4847 patch_prober.go:28] interesting pod/router-default-5444994796-hc9j8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:27:54 crc kubenswrapper[4847]: [-]has-synced failed: reason withheld Feb 18 00:27:54 crc kubenswrapper[4847]: [+]process-running ok Feb 18 00:27:54 crc kubenswrapper[4847]: healthz check failed Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.339926 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hc9j8" podUID="bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.347563 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74" event={"ID":"b5d0643c-a44f-4323-87a4-f70dc16a4a6b","Type":"ContainerStarted","Data":"93b1c51d34a857a4d088ae2144569997ba74970fe2bebf910a1bd050333d2baa"} Feb 18 00:27:54 crc 
kubenswrapper[4847]: I0218 00:27:54.405242 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" event={"ID":"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c","Type":"ContainerStarted","Data":"96a495bf2c9adb1962cd35cf5fe155423f82659af87ceee478f2ff2291b7bd6f"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.406093 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.422811 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.423084 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.423183 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.423264 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.423321 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.438157 4847 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-xk7s7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body= Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.438217 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" podUID="c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" Feb 18 00:27:54 crc kubenswrapper[4847]: E0218 00:27:54.440040 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:54.940021826 +0000 UTC m=+148.317372768 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.453584 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.454087 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.454464 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.457420 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7cgq" 
event={"ID":"0aed4c6f-08ce-4dc7-ae2a-efb45adc0844","Type":"ContainerStarted","Data":"04f09ac80bd19d91f8781373ac9706148ef7d06882a4ee4e4e77da8df39e10e0"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.458366 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7cgq" Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.471806 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" podStartSLOduration=127.471782877 podStartE2EDuration="2m7.471782877s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:54.405465957 +0000 UTC m=+147.782816899" watchObservedRunningTime="2026-02-18 00:27:54.471782877 +0000 UTC m=+147.849133819" Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.473435 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sgg8q" event={"ID":"60cf65de-e894-4bcd-99b6-bb7642275ed6","Type":"ContainerStarted","Data":"e977beeacd76639053f12fb0dbc204586ee4b289c37661504afbaaa6852fc12f"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.510395 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6hrpj" event={"ID":"d1d3ce5d-31e2-4602-9e02-076ee07ace01","Type":"ContainerStarted","Data":"2196b7339a73fa284c6ae68f2d6ab51d62750bc1062e4460285d1bc314f37af7"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.532245 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-pjq6b" podStartSLOduration=127.532228045 podStartE2EDuration="2m7.532228045s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:54.476104282 +0000 UTC m=+147.853455234" watchObservedRunningTime="2026-02-18 00:27:54.532228045 +0000 UTC m=+147.909578987" Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.534026 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.534476 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:54 crc kubenswrapper[4847]: E0218 00:27:54.536681 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:55.036667443 +0000 UTC m=+148.414018385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.564264 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cz42z" event={"ID":"595c45e9-e480-4930-b3b6-5075f16629a9","Type":"ContainerStarted","Data":"1325c763185e088b62fc56a349f236dcfb0edfbc3e936b46e2cfafbbe25faa40"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.567957 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtzlv" event={"ID":"4e454a89-9fab-4b19-9a33-7089da87f5a0","Type":"ContainerStarted","Data":"1355b57931cb9892f7f043ab619485d6ad97c97a6a1b24a9ac0af4f3adde749b"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.572131 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mbftk" event={"ID":"84ddec40-cc3b-4c50-92eb-d025f1f476d5","Type":"ContainerStarted","Data":"600146f29e069c8bf93f0b81b51c293f457080b10717cd8809b0aed2f620aed1"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.573867 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-sgg8q" podStartSLOduration=127.573852516 podStartE2EDuration="2m7.573852516s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:54.573054227 +0000 UTC 
m=+147.950405169" watchObservedRunningTime="2026-02-18 00:27:54.573852516 +0000 UTC m=+147.951203458" Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.574501 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7cgq" podStartSLOduration=127.574494952 podStartE2EDuration="2m7.574494952s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:54.533575628 +0000 UTC m=+147.910926570" watchObservedRunningTime="2026-02-18 00:27:54.574494952 +0000 UTC m=+147.951845894" Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.601884 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jzfq" event={"ID":"b902c054-bc7f-41e7-bcb3-bba9f5dc921d","Type":"ContainerStarted","Data":"948fc779fe5186004ed8c9d44c4f08d33095c4d6c6571fbe7c5da1f8d67f09da"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.647974 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.649014 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-kpcdn" event={"ID":"b73a0e0b-a65a-4985-b23e-40e2334a47e3","Type":"ContainerStarted","Data":"ea870ac0bd7b79a0d8414450976e8563df25edd64f61ba52836c0d013b2c6864"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.649239 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:54 crc kubenswrapper[4847]: E0218 00:27:54.649590 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:55.149563465 +0000 UTC m=+148.526914397 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.649901 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:54 crc kubenswrapper[4847]: E0218 00:27:54.664892 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:55.164838596 +0000 UTC m=+148.542189538 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.678097 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" podStartSLOduration=127.678071457 podStartE2EDuration="2m7.678071457s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:54.627300374 +0000 UTC m=+148.004651316" watchObservedRunningTime="2026-02-18 00:27:54.678071457 +0000 UTC m=+148.055422399" Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.680861 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jzfq" podStartSLOduration=127.680840304 podStartE2EDuration="2m7.680840304s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:54.676403607 +0000 UTC m=+148.053754559" watchObservedRunningTime="2026-02-18 00:27:54.680840304 +0000 UTC m=+148.058191246" Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.694785 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4v5gj" 
event={"ID":"6e36bd7a-1a37-44cf-90aa-c8cbb23f7508","Type":"ContainerStarted","Data":"2348c8ea961adcb5a91b4a43def4d273d79e5b7d6880ea10fc5438ae4f106b44"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.718372 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5xpzg" event={"ID":"80ed5db4-b6af-43e0-8d98-8f544e9b6d5e","Type":"ContainerStarted","Data":"d38b44de7804b1b565b6bed80471e8c6eebc257a6dd7b96e8e1220233b4f2a33"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.719822 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p5gdf" event={"ID":"728ae134-78e1-466d-9d53-8709b0a894ef","Type":"ContainerStarted","Data":"f3ca0e50060dd596bea217886da8cdcf51c54fdbe9179995e83d757b352cbc6a"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.731143 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n6zk7" event={"ID":"f65decaf-2dc6-495b-826b-b36cfa028e48","Type":"ContainerStarted","Data":"d585f2c0c76bca6482f557366e28969ac796ad016067249a7e919ddacfbed4a6"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.732268 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rj7hf" event={"ID":"37c05b59-e4e9-41f5-a36b-73c66027b1cc","Type":"ContainerStarted","Data":"70e51a547ff9d2f90157398b95ae96f819daff07647c3b726c21716d285e0377"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.733232 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6t5r" event={"ID":"4033b09d-aa99-4b2d-b12f-c5e6f58530f0","Type":"ContainerStarted","Data":"8b248b794ae299b5fe272659f0df4f19b171b59b25f0e803ef5a5683d839242a"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.734157 4847 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pz8zw" event={"ID":"44555695-834e-4ffc-bee2-b16d7adf6fbc","Type":"ContainerStarted","Data":"8f1568ce5d0e3074ef98c9921249d2c02f3bde5a6122e2b5b288713750418b0a"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.735595 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwnbk" event={"ID":"c6462fad-745e-4228-acdd-d0f00c2f066d","Type":"ContainerStarted","Data":"5fbd179f51a2b73db40600cbf936eaee75a8d94febd2061e35e708f518a55543"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.735631 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwnbk" event={"ID":"c6462fad-745e-4228-acdd-d0f00c2f066d","Type":"ContainerStarted","Data":"7f38865ecac4e452aca2fa9f9be2d982cfbbabbed9b3225987c6914140512f64"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.737104 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-4bt25" event={"ID":"f25e4583-f904-4b03-bcd3-1aca08f953f7","Type":"ContainerStarted","Data":"4b1b04611e60885f09d193cb0103dc5e498fda369f7bf69b9bb02cb565377785"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.738171 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4hc85" event={"ID":"2616897a-54d6-46f8-bc52-a4cf07afe350","Type":"ContainerStarted","Data":"0d10131d7e5a88552034bed465daceaacdfa651afb390000e358b47bee8e2033"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.740226 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lw7v7" event={"ID":"9de1b399-56b7-430a-b012-55f7ec14d3ed","Type":"ContainerStarted","Data":"6de368372bcf6800d52bd77fbbc3b21711519cb40f6b0118038668b04826b1e4"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.740350 4847 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lw7v7" event={"ID":"9de1b399-56b7-430a-b012-55f7ec14d3ed","Type":"ContainerStarted","Data":"ac2dc51f695f7fbed7a8fc655e99d55e492d0eb720c2f5694d9542b7022df322"} Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.742762 4847 patch_prober.go:28] interesting pod/console-operator-58897d9998-8d9lm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.742813 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8d9lm" podUID="197d7a69-19e6-4c08-b68d-f21073ad7487" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.743810 4847 patch_prober.go:28] interesting pod/downloads-7954f5f757-sc5ff container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.743933 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sc5ff" podUID="40549f47-53f3-4990-a2b0-921413ba5862" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.752505 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:54 crc kubenswrapper[4847]: E0218 00:27:54.752816 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:55.252797072 +0000 UTC m=+148.630148024 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.752975 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:54 crc kubenswrapper[4847]: E0218 00:27:54.753612 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:55.253593691 +0000 UTC m=+148.630944633 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.854059 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:54 crc kubenswrapper[4847]: E0218 00:27:54.854204 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:55.354182544 +0000 UTC m=+148.731533476 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.854947 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:54 crc kubenswrapper[4847]: E0218 00:27:54.856444 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:55.356432358 +0000 UTC m=+148.733783300 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:54 crc kubenswrapper[4847]: I0218 00:27:54.956182 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:54 crc kubenswrapper[4847]: E0218 00:27:54.956947 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:55.456927599 +0000 UTC m=+148.834278561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.041479 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.070901 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:55 crc kubenswrapper[4847]: E0218 00:27:55.071362 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:55.571346848 +0000 UTC m=+148.948697790 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.087221 4847 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-g7cgq container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.087571 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7cgq" podUID="0aed4c6f-08ce-4dc7-ae2a-efb45adc0844" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.147887 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qtzlv" podStartSLOduration=128.147865816 podStartE2EDuration="2m8.147865816s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:55.147710102 +0000 UTC m=+148.525061034" watchObservedRunningTime="2026-02-18 00:27:55.147865816 +0000 UTC m=+148.525216758" Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.150310 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-service-ca/service-ca-9c57cc56f-mbftk" podStartSLOduration=128.150298625 podStartE2EDuration="2m8.150298625s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:55.112374494 +0000 UTC m=+148.489725436" watchObservedRunningTime="2026-02-18 00:27:55.150298625 +0000 UTC m=+148.527649567" Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.176534 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:55 crc kubenswrapper[4847]: E0218 00:27:55.176975 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:55.676955972 +0000 UTC m=+149.054306914 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.183089 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cz42z" podStartSLOduration=128.183069871 podStartE2EDuration="2m8.183069871s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:55.181070212 +0000 UTC m=+148.558421154" watchObservedRunningTime="2026-02-18 00:27:55.183069871 +0000 UTC m=+148.560420813" Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.237595 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.240510 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-kpcdn" podStartSLOduration=128.240489645 podStartE2EDuration="2m8.240489645s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:55.238401595 +0000 UTC m=+148.615752537" watchObservedRunningTime="2026-02-18 00:27:55.240489645 +0000 UTC m=+148.617840587" Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.279453 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:55 crc kubenswrapper[4847]: E0218 00:27:55.280284 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:55.780271221 +0000 UTC m=+149.157622163 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.325390 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4v5gj" podStartSLOduration=128.325359256 podStartE2EDuration="2m8.325359256s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:55.29504238 +0000 UTC m=+148.672393322" watchObservedRunningTime="2026-02-18 00:27:55.325359256 +0000 UTC m=+148.702710198" Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.344873 4847 patch_prober.go:28] interesting pod/router-default-5444994796-hc9j8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:27:55 crc kubenswrapper[4847]: [-]has-synced failed: reason withheld Feb 18 00:27:55 crc kubenswrapper[4847]: [+]process-running ok Feb 18 00:27:55 crc kubenswrapper[4847]: healthz check failed Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.344939 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hc9j8" podUID="bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.348113 4847 patch_prober.go:28] interesting 
pod/packageserver-d55dfcdfc-t4r74 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.348166 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74" podUID="b5d0643c-a44f-4323-87a4-f70dc16a4a6b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.377811 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-4bt25" podStartSLOduration=8.37779095 podStartE2EDuration="8.37779095s" podCreationTimestamp="2026-02-18 00:27:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:55.327870147 +0000 UTC m=+148.705221089" watchObservedRunningTime="2026-02-18 00:27:55.37779095 +0000 UTC m=+148.755141892" Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.382156 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:55 crc kubenswrapper[4847]: E0218 00:27:55.382537 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:55.882521885 +0000 UTC m=+149.259872827 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.447501 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-pz8zw" podStartSLOduration=128.447451311 podStartE2EDuration="2m8.447451311s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:55.376662022 +0000 UTC m=+148.754012964" watchObservedRunningTime="2026-02-18 00:27:55.447451311 +0000 UTC m=+148.824802253" Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.447837 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-4hc85" podStartSLOduration=8.447830771 podStartE2EDuration="8.447830771s" podCreationTimestamp="2026-02-18 00:27:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:55.441248381 +0000 UTC m=+148.818599323" watchObservedRunningTime="2026-02-18 00:27:55.447830771 +0000 UTC m=+148.825181703" Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.484403 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:55 crc kubenswrapper[4847]: E0218 00:27:55.485268 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:55.985239929 +0000 UTC m=+149.362590871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.587269 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:55 crc kubenswrapper[4847]: E0218 00:27:55.587773 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:56.087731468 +0000 UTC m=+149.465082410 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.588166 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:55 crc kubenswrapper[4847]: E0218 00:27:55.588568 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:56.088553748 +0000 UTC m=+149.465904690 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.691714 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:55 crc kubenswrapper[4847]: E0218 00:27:55.691921 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:56.191887947 +0000 UTC m=+149.569238899 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.691984 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:55 crc kubenswrapper[4847]: E0218 00:27:55.692543 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:56.192534682 +0000 UTC m=+149.569885624 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.782716 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvh8m" event={"ID":"8065a5b7-ace7-4dfd-baff-f4d40fe197ab","Type":"ContainerStarted","Data":"dcc2c7efc8c131a93c4952928ed108ad53ed1e23c79a67c3e56bac2112ca6070"} Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.784109 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvh8m" Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.790276 4847 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qvh8m container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.790352 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvh8m" podUID="8065a5b7-ace7-4dfd-baff-f4d40fe197ab" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.805421 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:55 crc kubenswrapper[4847]: E0218 00:27:55.806316 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:56.306292295 +0000 UTC m=+149.683643237 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.828099 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n6zk7" event={"ID":"f65decaf-2dc6-495b-826b-b36cfa028e48","Type":"ContainerStarted","Data":"e923cf511b33af9c31ee1f76624aed3b607d38f98b9ae80edae2472f54c6bb20"} Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.828645 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n6zk7" Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.859321 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-hwnbk" podStartSLOduration=128.859279282 podStartE2EDuration="2m8.859279282s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:55.527913765 +0000 UTC m=+148.905264707" watchObservedRunningTime="2026-02-18 00:27:55.859279282 +0000 UTC m=+149.236630224" Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.859421 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j66wn" event={"ID":"f6560bd1-3171-4adb-9a64-2ce644a55abf","Type":"ContainerStarted","Data":"47f22ea6818e95d85e60b837b8aa8afb13aa387312d6f4e0457ee58860cea2d7"} Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.860506 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvh8m" podStartSLOduration=128.860498711 podStartE2EDuration="2m8.860498711s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:55.843404686 +0000 UTC m=+149.220755628" watchObservedRunningTime="2026-02-18 00:27:55.860498711 +0000 UTC m=+149.237849653" Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.880143 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n6zk7" podStartSLOduration=128.880122188 podStartE2EDuration="2m8.880122188s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:55.878650392 +0000 UTC m=+149.256001334" watchObservedRunningTime="2026-02-18 00:27:55.880122188 +0000 UTC m=+149.257473130" Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.885105 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rj7hf" 
event={"ID":"37c05b59-e4e9-41f5-a36b-73c66027b1cc","Type":"ContainerStarted","Data":"eebfc82bb48fff70b621d6cf21a3e5aca7ec96069f5e3d16319b9378f975c325"} Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.885149 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rj7hf" event={"ID":"37c05b59-e4e9-41f5-a36b-73c66027b1cc","Type":"ContainerStarted","Data":"73fae9b5ba5694c95f9a991f2b446cb73416397541201f4352bf2850f48aab70"} Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.901769 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" event={"ID":"30a4cfb1-057e-4d60-a8bd-f9ee95163f73","Type":"ContainerStarted","Data":"22c8863a60d5b72b4275eee1db2c0cb8ce1647f2f08148bfa1b3f02161bdc950"} Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.909494 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:55 crc kubenswrapper[4847]: E0218 00:27:55.909977 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:56.409954622 +0000 UTC m=+149.787305564 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.919882 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" event={"ID":"cbf93e33-d1c5-4eff-987d-7563a4bd5e45","Type":"ContainerStarted","Data":"cdcfc51172d7972b4b0367d27437ef6646557c6203a755b8ea01b0bd5586fb93"} Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.932670 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-n67v9" event={"ID":"29ecd22e-1180-4df8-98bc-d36c04c8faf3","Type":"ContainerStarted","Data":"92ec46d9a5a30560816d634ea19d72f75ec4468618e13c8c424d3f2ce943e898"} Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.944520 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6kdbm" event={"ID":"3ee1601a-2d54-499e-bbe2-69884e9a0678","Type":"ContainerStarted","Data":"3f58d2b3fd15764ca61566f615319335c28a0a58e73a210bdb2e89c7f3ee54db"} Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.961300 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6hrpj" event={"ID":"d1d3ce5d-31e2-4602-9e02-076ee07ace01","Type":"ContainerStarted","Data":"7735ff5c6d68ae9c170b20a975a5993902dc5db36984f111214e04320e4c1978"} Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.971823 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lw7v7" 
event={"ID":"9de1b399-56b7-430a-b012-55f7ec14d3ed","Type":"ContainerStarted","Data":"8987702a872a2cceb8e1e148de2eb7b419c0edf9bb7fc6217079a352f1facdde"} Feb 18 00:27:55 crc kubenswrapper[4847]: I0218 00:27:55.988117 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6t5r" event={"ID":"4033b09d-aa99-4b2d-b12f-c5e6f58530f0","Type":"ContainerStarted","Data":"f7413979dc062de36093b55e5a86ee0857a4d3beef87c20a41902bcfcc507fce"} Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:55.999066 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hc5ks" event={"ID":"7f02ce80-0362-4208-bfcf-3f68956dd6f2","Type":"ContainerStarted","Data":"b18ac24febd5e4a9065cb8822d80ac9a78a0ad9ca841a557f171d3b863b81fa6"} Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:55.999687 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hc5ks" Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.008594 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rj7hf" podStartSLOduration=129.008574327 podStartE2EDuration="2m9.008574327s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:55.95146743 +0000 UTC m=+149.328818372" watchObservedRunningTime="2026-02-18 00:27:56.008574327 +0000 UTC m=+149.385925259" Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.012765 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.021369 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-d4c9w" podStartSLOduration=129.021344137 podStartE2EDuration="2m9.021344137s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:56.011908848 +0000 UTC m=+149.389259810" watchObservedRunningTime="2026-02-18 00:27:56.021344137 +0000 UTC m=+149.398695079" Feb 18 00:27:56 crc kubenswrapper[4847]: E0218 00:27:56.023418 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:56.523376907 +0000 UTC m=+149.900727849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.045049 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5xpzg" event={"ID":"80ed5db4-b6af-43e0-8d98-8f544e9b6d5e","Type":"ContainerStarted","Data":"3eda8549c9472dfc6be43739268e3f3bad60e67f1b83dec8b19ca7df6e0660ff"} Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.052431 4847 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hwsk5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.052508 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" podUID="daaf1919-f9da-4151-8932-4c77a478b531" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.053827 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6kdbm" podStartSLOduration=129.053809636 podStartE2EDuration="2m9.053809636s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 00:27:56.052920984 +0000 UTC m=+149.430271926" watchObservedRunningTime="2026-02-18 00:27:56.053809636 +0000 UTC m=+149.431160578" Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.084821 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-t4r74" Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.113364 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr" podStartSLOduration=129.113348342 podStartE2EDuration="2m9.113348342s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:56.112165383 +0000 UTC m=+149.489516325" watchObservedRunningTime="2026-02-18 00:27:56.113348342 +0000 UTC m=+149.490699284" Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.120905 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7cgq" Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.131438 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:56 crc kubenswrapper[4847]: E0218 00:27:56.133218 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:56.633205334 +0000 UTC m=+150.010556276 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.151785 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hc5ks" podStartSLOduration=129.151739644 podStartE2EDuration="2m9.151739644s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:56.149343636 +0000 UTC m=+149.526694578" watchObservedRunningTime="2026-02-18 00:27:56.151739644 +0000 UTC m=+149.529090586" Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.183099 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-lw7v7" podStartSLOduration=129.183080455 podStartE2EDuration="2m9.183080455s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:56.181523947 +0000 UTC m=+149.558874889" watchObservedRunningTime="2026-02-18 00:27:56.183080455 +0000 UTC m=+149.560431397" Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.226742 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6t5r" podStartSLOduration=129.226722635 podStartE2EDuration="2m9.226722635s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:56.225005913 +0000 UTC m=+149.602356855" watchObservedRunningTime="2026-02-18 00:27:56.226722635 +0000 UTC m=+149.604073577" Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.235492 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:56 crc kubenswrapper[4847]: E0218 00:27:56.235716 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:56.735693843 +0000 UTC m=+150.113044785 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.236178 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:56 crc kubenswrapper[4847]: E0218 00:27:56.237978 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:56.737966758 +0000 UTC m=+150.115317700 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.271883 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-5xpzg" podStartSLOduration=129.271863861 podStartE2EDuration="2m9.271863861s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:56.270798055 +0000 UTC m=+149.648148997" watchObservedRunningTime="2026-02-18 00:27:56.271863861 +0000 UTC m=+149.649214803" Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.338755 4847 patch_prober.go:28] interesting pod/router-default-5444994796-hc9j8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:27:56 crc kubenswrapper[4847]: [-]has-synced failed: reason withheld Feb 18 00:27:56 crc kubenswrapper[4847]: [+]process-running ok Feb 18 00:27:56 crc kubenswrapper[4847]: healthz check failed Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.339040 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hc9j8" podUID="bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.352789 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:56 crc kubenswrapper[4847]: E0218 00:27:56.353141 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:56.853128015 +0000 UTC m=+150.230478957 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.402934 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6hrpj" podStartSLOduration=129.402914934 podStartE2EDuration="2m9.402914934s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:56.362377689 +0000 UTC m=+149.739728631" watchObservedRunningTime="2026-02-18 00:27:56.402914934 +0000 UTC m=+149.780265876" Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.454115 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:56 crc kubenswrapper[4847]: E0218 00:27:56.464043 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:56.964020118 +0000 UTC m=+150.341371060 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.573214 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:56 crc kubenswrapper[4847]: E0218 00:27:56.573676 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:57.07366049 +0000 UTC m=+150.451011432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.677381 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:56 crc kubenswrapper[4847]: E0218 00:27:56.677755 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:57.177743078 +0000 UTC m=+150.555094020 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.779198 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:56 crc kubenswrapper[4847]: E0218 00:27:56.779529 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:57.279515339 +0000 UTC m=+150.656866281 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.881183 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:56 crc kubenswrapper[4847]: E0218 00:27:56.881555 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:57.381542767 +0000 UTC m=+150.758893709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.981927 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:56 crc kubenswrapper[4847]: E0218 00:27:56.982091 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:57.482065778 +0000 UTC m=+150.859416720 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:56 crc kubenswrapper[4847]: I0218 00:27:56.982730 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:56 crc kubenswrapper[4847]: E0218 00:27:56.983026 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:57.483019041 +0000 UTC m=+150.860369983 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.048067 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"fdbfa66c1cfc0d884917599006cde7171096da4f931f8451d6cd978260b08bee"} Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.048116 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7a037350c32cb4fa1d812f64c23ece52f66f3ad5799e136826459f91005414fb"} Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.048282 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.049436 4847 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-xk7s7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.049480 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" podUID="c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c" containerName="oauth-openshift" probeResult="failure" output="Get 
\"https://10.217.0.9:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.049670 4847 generic.go:334] "Generic (PLEG): container finished" podID="b73a0e0b-a65a-4985-b23e-40e2334a47e3" containerID="ea870ac0bd7b79a0d8414450976e8563df25edd64f61ba52836c0d013b2c6864" exitCode=0 Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.049725 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-kpcdn" event={"ID":"b73a0e0b-a65a-4985-b23e-40e2334a47e3","Type":"ContainerDied","Data":"ea870ac0bd7b79a0d8414450976e8563df25edd64f61ba52836c0d013b2c6864"} Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.050435 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"884e3ebfafe537cf3f8cec84da5d98e51cce4a1b00dcbf3df55edf72983c99f5"} Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.051595 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"12c0db6763fdb6ff0d8f3004caf42525ae74c1e4482270847dcb6ec84b2a944e"} Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.051634 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"39bb79b0215abb362b0c7869f4363e5d2a717d60c773fef0d89b4d3def6e8af2"} Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.053987 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6hrpj" 
event={"ID":"d1d3ce5d-31e2-4602-9e02-076ee07ace01","Type":"ContainerStarted","Data":"eeee9e27fc736631db899f9cdf07246d50a739a4534e86d24f4ac0590b070f49"} Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.058767 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-n67v9" event={"ID":"29ecd22e-1180-4df8-98bc-d36c04c8faf3","Type":"ContainerStarted","Data":"948feee1d6bf024c557176ad17ed3a0b599357f25c6345be9fadd98082c316a3"} Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.061984 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.068785 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qvh8m" Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.084213 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:57 crc kubenswrapper[4847]: E0218 00:27:57.084342 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:57.584321491 +0000 UTC m=+150.961672433 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.084538 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:57 crc kubenswrapper[4847]: E0218 00:27:57.086810 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:57.586802302 +0000 UTC m=+150.964153244 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.186146 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:57 crc kubenswrapper[4847]: E0218 00:27:57.186333 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:57.686302158 +0000 UTC m=+151.063653100 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.186463 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:57 crc kubenswrapper[4847]: E0218 00:27:57.186768 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:57.686755329 +0000 UTC m=+151.064106271 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.288295 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:57 crc kubenswrapper[4847]: E0218 00:27:57.289155 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:57.789137635 +0000 UTC m=+151.166488577 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.338529 4847 patch_prober.go:28] interesting pod/router-default-5444994796-hc9j8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:27:57 crc kubenswrapper[4847]: [-]has-synced failed: reason withheld Feb 18 00:27:57 crc kubenswrapper[4847]: [+]process-running ok Feb 18 00:27:57 crc kubenswrapper[4847]: healthz check failed Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.338587 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hc9j8" podUID="bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.340258 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-n67v9" podStartSLOduration=10.340227486 podStartE2EDuration="10.340227486s" podCreationTimestamp="2026-02-18 00:27:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:57.335575613 +0000 UTC m=+150.712926555" watchObservedRunningTime="2026-02-18 00:27:57.340227486 +0000 UTC m=+150.717578428" Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.391151 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:57 crc kubenswrapper[4847]: E0218 00:27:57.391594 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:57.891575763 +0000 UTC m=+151.268926705 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.491769 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:57 crc kubenswrapper[4847]: E0218 00:27:57.491945 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:57.99191758 +0000 UTC m=+151.369268522 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.492326 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:57 crc kubenswrapper[4847]: E0218 00:27:57.492733 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:57.992715269 +0000 UTC m=+151.370066211 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.595126 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:57 crc kubenswrapper[4847]: E0218 00:27:57.595462 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:58.095415673 +0000 UTC m=+151.472766625 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.595520 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:57 crc kubenswrapper[4847]: E0218 00:27:57.595870 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:58.095858974 +0000 UTC m=+151.473209906 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.681261 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-px9xt"] Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.698771 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.699078 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb0e353b-9f34-432f-92f1-9102f53aeff3-utilities\") pod \"community-operators-px9xt\" (UID: \"bb0e353b-9f34-432f-92f1-9102f53aeff3\") " pod="openshift-marketplace/community-operators-px9xt" Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.699129 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjz6q\" (UniqueName: \"kubernetes.io/projected/bb0e353b-9f34-432f-92f1-9102f53aeff3-kube-api-access-kjz6q\") pod \"community-operators-px9xt\" (UID: \"bb0e353b-9f34-432f-92f1-9102f53aeff3\") " pod="openshift-marketplace/community-operators-px9xt" Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.699178 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb0e353b-9f34-432f-92f1-9102f53aeff3-catalog-content\") pod \"community-operators-px9xt\" (UID: \"bb0e353b-9f34-432f-92f1-9102f53aeff3\") " pod="openshift-marketplace/community-operators-px9xt" Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.699244 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-px9xt"] Feb 18 00:27:57 crc kubenswrapper[4847]: E0218 00:27:57.699313 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:58.199297886 +0000 UTC m=+151.576648828 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.699680 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-px9xt" Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.711919 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.799796 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb0e353b-9f34-432f-92f1-9102f53aeff3-catalog-content\") pod \"community-operators-px9xt\" (UID: \"bb0e353b-9f34-432f-92f1-9102f53aeff3\") " pod="openshift-marketplace/community-operators-px9xt" Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.799874 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb0e353b-9f34-432f-92f1-9102f53aeff3-utilities\") pod \"community-operators-px9xt\" (UID: \"bb0e353b-9f34-432f-92f1-9102f53aeff3\") " pod="openshift-marketplace/community-operators-px9xt" Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.799911 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.799947 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjz6q\" (UniqueName: \"kubernetes.io/projected/bb0e353b-9f34-432f-92f1-9102f53aeff3-kube-api-access-kjz6q\") pod \"community-operators-px9xt\" (UID: \"bb0e353b-9f34-432f-92f1-9102f53aeff3\") " pod="openshift-marketplace/community-operators-px9xt" Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.800676 4847 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb0e353b-9f34-432f-92f1-9102f53aeff3-catalog-content\") pod \"community-operators-px9xt\" (UID: \"bb0e353b-9f34-432f-92f1-9102f53aeff3\") " pod="openshift-marketplace/community-operators-px9xt" Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.800949 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb0e353b-9f34-432f-92f1-9102f53aeff3-utilities\") pod \"community-operators-px9xt\" (UID: \"bb0e353b-9f34-432f-92f1-9102f53aeff3\") " pod="openshift-marketplace/community-operators-px9xt" Feb 18 00:27:57 crc kubenswrapper[4847]: E0218 00:27:57.801217 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:58.301203631 +0000 UTC m=+151.678554573 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.846408 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjz6q\" (UniqueName: \"kubernetes.io/projected/bb0e353b-9f34-432f-92f1-9102f53aeff3-kube-api-access-kjz6q\") pod \"community-operators-px9xt\" (UID: \"bb0e353b-9f34-432f-92f1-9102f53aeff3\") " pod="openshift-marketplace/community-operators-px9xt" Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.871327 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tqxr4"] Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.872347 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tqxr4" Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.885067 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.905520 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.905826 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c5d23e9-80d6-4df1-9484-3d5d452231f6-catalog-content\") pod \"certified-operators-tqxr4\" (UID: \"4c5d23e9-80d6-4df1-9484-3d5d452231f6\") " pod="openshift-marketplace/certified-operators-tqxr4" Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.905864 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c5d23e9-80d6-4df1-9484-3d5d452231f6-utilities\") pod \"certified-operators-tqxr4\" (UID: \"4c5d23e9-80d6-4df1-9484-3d5d452231f6\") " pod="openshift-marketplace/certified-operators-tqxr4" Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.905885 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zt2q\" (UniqueName: \"kubernetes.io/projected/4c5d23e9-80d6-4df1-9484-3d5d452231f6-kube-api-access-4zt2q\") pod \"certified-operators-tqxr4\" (UID: \"4c5d23e9-80d6-4df1-9484-3d5d452231f6\") " pod="openshift-marketplace/certified-operators-tqxr4" Feb 18 00:27:57 crc kubenswrapper[4847]: I0218 00:27:57.905893 4847 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tqxr4"] Feb 18 00:27:57 crc kubenswrapper[4847]: E0218 00:27:57.905989 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:58.405972985 +0000 UTC m=+151.783323927 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.008151 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c5d23e9-80d6-4df1-9484-3d5d452231f6-utilities\") pod \"certified-operators-tqxr4\" (UID: \"4c5d23e9-80d6-4df1-9484-3d5d452231f6\") " pod="openshift-marketplace/certified-operators-tqxr4" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.008205 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zt2q\" (UniqueName: \"kubernetes.io/projected/4c5d23e9-80d6-4df1-9484-3d5d452231f6-kube-api-access-4zt2q\") pod \"certified-operators-tqxr4\" (UID: \"4c5d23e9-80d6-4df1-9484-3d5d452231f6\") " pod="openshift-marketplace/certified-operators-tqxr4" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.008256 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.008321 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c5d23e9-80d6-4df1-9484-3d5d452231f6-catalog-content\") pod \"certified-operators-tqxr4\" (UID: \"4c5d23e9-80d6-4df1-9484-3d5d452231f6\") " pod="openshift-marketplace/certified-operators-tqxr4" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.008839 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c5d23e9-80d6-4df1-9484-3d5d452231f6-utilities\") pod \"certified-operators-tqxr4\" (UID: \"4c5d23e9-80d6-4df1-9484-3d5d452231f6\") " pod="openshift-marketplace/certified-operators-tqxr4" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.008881 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c5d23e9-80d6-4df1-9484-3d5d452231f6-catalog-content\") pod \"certified-operators-tqxr4\" (UID: \"4c5d23e9-80d6-4df1-9484-3d5d452231f6\") " pod="openshift-marketplace/certified-operators-tqxr4" Feb 18 00:27:58 crc kubenswrapper[4847]: E0218 00:27:58.009095 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:58.509074339 +0000 UTC m=+151.886425281 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.038892 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zt2q\" (UniqueName: \"kubernetes.io/projected/4c5d23e9-80d6-4df1-9484-3d5d452231f6-kube-api-access-4zt2q\") pod \"certified-operators-tqxr4\" (UID: \"4c5d23e9-80d6-4df1-9484-3d5d452231f6\") " pod="openshift-marketplace/certified-operators-tqxr4" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.041351 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.056347 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-px9xt" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.057982 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hmdff"] Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.059109 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hmdff" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.071589 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j66wn" event={"ID":"f6560bd1-3171-4adb-9a64-2ce644a55abf","Type":"ContainerStarted","Data":"de9d2604c6859c19366a794750911636f473e2c950b38a16d1a9ebfd90fa0da0"} Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.078182 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"35f442425c462764d33b3cf971bfcea9af579efdadf7f0dbe9d4b1b756b0449a"} Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.079344 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-n67v9" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.083461 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hmdff"] Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.098878 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hc5ks" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.115177 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:58 crc kubenswrapper[4847]: E0218 00:27:58.115436 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-18 00:27:58.615400061 +0000 UTC m=+151.992751003 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.115570 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhvrj\" (UniqueName: \"kubernetes.io/projected/40f2a712-6701-4c22-94c2-6a644742459b-kube-api-access-jhvrj\") pod \"community-operators-hmdff\" (UID: \"40f2a712-6701-4c22-94c2-6a644742459b\") " pod="openshift-marketplace/community-operators-hmdff" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.115638 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.115837 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40f2a712-6701-4c22-94c2-6a644742459b-catalog-content\") pod \"community-operators-hmdff\" (UID: \"40f2a712-6701-4c22-94c2-6a644742459b\") " pod="openshift-marketplace/community-operators-hmdff" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.115987 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40f2a712-6701-4c22-94c2-6a644742459b-utilities\") pod \"community-operators-hmdff\" (UID: \"40f2a712-6701-4c22-94c2-6a644742459b\") " pod="openshift-marketplace/community-operators-hmdff" Feb 18 00:27:58 crc kubenswrapper[4847]: E0218 00:27:58.117842 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:58.61782732 +0000 UTC m=+151.995178262 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.210387 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tqxr4" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.218529 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.218895 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40f2a712-6701-4c22-94c2-6a644742459b-utilities\") pod \"community-operators-hmdff\" (UID: \"40f2a712-6701-4c22-94c2-6a644742459b\") " pod="openshift-marketplace/community-operators-hmdff" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.218971 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhvrj\" (UniqueName: \"kubernetes.io/projected/40f2a712-6701-4c22-94c2-6a644742459b-kube-api-access-jhvrj\") pod \"community-operators-hmdff\" (UID: \"40f2a712-6701-4c22-94c2-6a644742459b\") " pod="openshift-marketplace/community-operators-hmdff" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.219059 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40f2a712-6701-4c22-94c2-6a644742459b-catalog-content\") pod \"community-operators-hmdff\" (UID: \"40f2a712-6701-4c22-94c2-6a644742459b\") " pod="openshift-marketplace/community-operators-hmdff" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.220277 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40f2a712-6701-4c22-94c2-6a644742459b-catalog-content\") pod \"community-operators-hmdff\" (UID: \"40f2a712-6701-4c22-94c2-6a644742459b\") 
" pod="openshift-marketplace/community-operators-hmdff" Feb 18 00:27:58 crc kubenswrapper[4847]: E0218 00:27:58.220375 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:58.72035385 +0000 UTC m=+152.097704792 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.220621 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40f2a712-6701-4c22-94c2-6a644742459b-utilities\") pod \"community-operators-hmdff\" (UID: \"40f2a712-6701-4c22-94c2-6a644742459b\") " pod="openshift-marketplace/community-operators-hmdff" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.250440 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhvrj\" (UniqueName: \"kubernetes.io/projected/40f2a712-6701-4c22-94c2-6a644742459b-kube-api-access-jhvrj\") pod \"community-operators-hmdff\" (UID: \"40f2a712-6701-4c22-94c2-6a644742459b\") " pod="openshift-marketplace/community-operators-hmdff" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.268307 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ckhs7"] Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.269307 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ckhs7" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.290435 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ckhs7"] Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.322168 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a457ff-fd78-446b-85ca-acd23651863f-utilities\") pod \"certified-operators-ckhs7\" (UID: \"15a457ff-fd78-446b-85ca-acd23651863f\") " pod="openshift-marketplace/certified-operators-ckhs7" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.322227 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a457ff-fd78-446b-85ca-acd23651863f-catalog-content\") pod \"certified-operators-ckhs7\" (UID: \"15a457ff-fd78-446b-85ca-acd23651863f\") " pod="openshift-marketplace/certified-operators-ckhs7" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.322272 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.322306 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft77l\" (UniqueName: \"kubernetes.io/projected/15a457ff-fd78-446b-85ca-acd23651863f-kube-api-access-ft77l\") pod \"certified-operators-ckhs7\" (UID: \"15a457ff-fd78-446b-85ca-acd23651863f\") " pod="openshift-marketplace/certified-operators-ckhs7" Feb 18 00:27:58 crc kubenswrapper[4847]: E0218 
00:27:58.322754 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:58.822738936 +0000 UTC m=+152.200089878 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.346803 4847 patch_prober.go:28] interesting pod/router-default-5444994796-hc9j8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 00:27:58 crc kubenswrapper[4847]: [-]has-synced failed: reason withheld
Feb 18 00:27:58 crc kubenswrapper[4847]: [+]process-running ok
Feb 18 00:27:58 crc kubenswrapper[4847]: healthz check failed
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.346858 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hc9j8" podUID="bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.364757 4847 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.424210 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 00:27:58 crc kubenswrapper[4847]: E0218 00:27:58.425094 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:58.925050641 +0000 UTC m=+152.302401583 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.425328 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a457ff-fd78-446b-85ca-acd23651863f-utilities\") pod \"certified-operators-ckhs7\" (UID: \"15a457ff-fd78-446b-85ca-acd23651863f\") " pod="openshift-marketplace/certified-operators-ckhs7"
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.425444 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a457ff-fd78-446b-85ca-acd23651863f-catalog-content\") pod \"certified-operators-ckhs7\" (UID: \"15a457ff-fd78-446b-85ca-acd23651863f\") " pod="openshift-marketplace/certified-operators-ckhs7"
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.425525 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn"
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.425573 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft77l\" (UniqueName: \"kubernetes.io/projected/15a457ff-fd78-446b-85ca-acd23651863f-kube-api-access-ft77l\") pod \"certified-operators-ckhs7\" (UID: \"15a457ff-fd78-446b-85ca-acd23651863f\") " pod="openshift-marketplace/certified-operators-ckhs7"
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.427194 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a457ff-fd78-446b-85ca-acd23651863f-utilities\") pod \"certified-operators-ckhs7\" (UID: \"15a457ff-fd78-446b-85ca-acd23651863f\") " pod="openshift-marketplace/certified-operators-ckhs7"
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.427454 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a457ff-fd78-446b-85ca-acd23651863f-catalog-content\") pod \"certified-operators-ckhs7\" (UID: \"15a457ff-fd78-446b-85ca-acd23651863f\") " pod="openshift-marketplace/certified-operators-ckhs7"
Feb 18 00:27:58 crc kubenswrapper[4847]: E0218 00:27:58.428098 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:58.928072104 +0000 UTC m=+152.305423046 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.469983 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hmdff"
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.485677 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft77l\" (UniqueName: \"kubernetes.io/projected/15a457ff-fd78-446b-85ca-acd23651863f-kube-api-access-ft77l\") pod \"certified-operators-ckhs7\" (UID: \"15a457ff-fd78-446b-85ca-acd23651863f\") " pod="openshift-marketplace/certified-operators-ckhs7"
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.526649 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 00:27:58 crc kubenswrapper[4847]: E0218 00:27:58.527154 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-18 00:27:59.02713223 +0000 UTC m=+152.404483162 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.571140 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-kpcdn"
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.619921 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ckhs7"
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.621281 4847 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-18T00:27:58.364776327Z","Handler":null,"Name":""}
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.636294 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b73a0e0b-a65a-4985-b23e-40e2334a47e3-config-volume\") pod \"b73a0e0b-a65a-4985-b23e-40e2334a47e3\" (UID: \"b73a0e0b-a65a-4985-b23e-40e2334a47e3\") "
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.636665 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b73a0e0b-a65a-4985-b23e-40e2334a47e3-secret-volume\") pod \"b73a0e0b-a65a-4985-b23e-40e2334a47e3\" (UID: \"b73a0e0b-a65a-4985-b23e-40e2334a47e3\") "
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.636717 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtfjh\" (UniqueName: \"kubernetes.io/projected/b73a0e0b-a65a-4985-b23e-40e2334a47e3-kube-api-access-jtfjh\") pod \"b73a0e0b-a65a-4985-b23e-40e2334a47e3\" (UID: \"b73a0e0b-a65a-4985-b23e-40e2334a47e3\") "
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.638712 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn"
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.638724 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b73a0e0b-a65a-4985-b23e-40e2334a47e3-config-volume" (OuterVolumeSpecName: "config-volume") pod "b73a0e0b-a65a-4985-b23e-40e2334a47e3" (UID: "b73a0e0b-a65a-4985-b23e-40e2334a47e3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.639131 4847 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b73a0e0b-a65a-4985-b23e-40e2334a47e3-config-volume\") on node \"crc\" DevicePath \"\""
Feb 18 00:27:58 crc kubenswrapper[4847]: E0218 00:27:58.639735 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:27:59.139716394 +0000 UTC m=+152.517067336 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9jnmn" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.644346 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b73a0e0b-a65a-4985-b23e-40e2334a47e3-kube-api-access-jtfjh" (OuterVolumeSpecName: "kube-api-access-jtfjh") pod "b73a0e0b-a65a-4985-b23e-40e2334a47e3" (UID: "b73a0e0b-a65a-4985-b23e-40e2334a47e3"). InnerVolumeSpecName "kube-api-access-jtfjh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.654544 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b73a0e0b-a65a-4985-b23e-40e2334a47e3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b73a0e0b-a65a-4985-b23e-40e2334a47e3" (UID: "b73a0e0b-a65a-4985-b23e-40e2334a47e3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.663074 4847 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.663490 4847 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.740208 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.740614 4847 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b73a0e0b-a65a-4985-b23e-40e2334a47e3-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.740634 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtfjh\" (UniqueName: \"kubernetes.io/projected/b73a0e0b-a65a-4985-b23e-40e2334a47e3-kube-api-access-jtfjh\") on node \"crc\" DevicePath \"\""
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.791957 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-px9xt"]
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.796430 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 18 00:27:58 crc kubenswrapper[4847]: W0218 00:27:58.817176 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb0e353b_9f34_432f_92f1_9102f53aeff3.slice/crio-c37d6faa41a67d8cf833464a7644ff61cb3fe8b1b74781af65ecbc1b50170c1b WatchSource:0}: Error finding container c37d6faa41a67d8cf833464a7644ff61cb3fe8b1b74781af65ecbc1b50170c1b: Status 404 returned error can't find the container with id c37d6faa41a67d8cf833464a7644ff61cb3fe8b1b74781af65ecbc1b50170c1b
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.842079 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn"
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.863694 4847 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.863741 4847 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn"
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.903814 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9jnmn\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn"
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.912044 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hmdff"]
Feb 18 00:27:58 crc kubenswrapper[4847]: I0218 00:27:58.960772 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tqxr4"]
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.070191 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn"
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.153198 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px9xt" event={"ID":"bb0e353b-9f34-432f-92f1-9102f53aeff3","Type":"ContainerStarted","Data":"eeec3c165dc7b83984731197b9f6f528474c5137a845a7e99ff2536b2a38b16a"}
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.153572 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px9xt" event={"ID":"bb0e353b-9f34-432f-92f1-9102f53aeff3","Type":"ContainerStarted","Data":"c37d6faa41a67d8cf833464a7644ff61cb3fe8b1b74781af65ecbc1b50170c1b"}
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.175196 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hmdff" event={"ID":"40f2a712-6701-4c22-94c2-6a644742459b","Type":"ContainerStarted","Data":"829f7a68396de8dee18a6d8dfad575a3917148a0e9db3169cb6fb52f4061fb1b"}
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.181197 4847 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.215340 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j66wn" event={"ID":"f6560bd1-3171-4adb-9a64-2ce644a55abf","Type":"ContainerStarted","Data":"189e7eb2bd65e4e4d2659992a70eed7a729ff3260b2c7ad58b4401ebcf649eb5"}
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.215694 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-j66wn" event={"ID":"f6560bd1-3171-4adb-9a64-2ce644a55abf","Type":"ContainerStarted","Data":"e5c3d0c2fb356d69a67093f4997c88f7383bf2be1a0130ce05aff9d65bae1b5e"}
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.230113 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-kpcdn" event={"ID":"b73a0e0b-a65a-4985-b23e-40e2334a47e3","Type":"ContainerDied","Data":"30dab2b1cc01cbe5f4b7e230e51567f02c5e3ef157e9dbc6bf75d1cd2f2d13cd"}
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.230337 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30dab2b1cc01cbe5f4b7e230e51567f02c5e3ef157e9dbc6bf75d1cd2f2d13cd"
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.230467 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-kpcdn"
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.247732 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tqxr4" event={"ID":"4c5d23e9-80d6-4df1-9484-3d5d452231f6","Type":"ContainerStarted","Data":"76be970c28143bf498e5fa4fe1e291728c3cf57fa59966ed89f8f0b127f17816"}
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.264528 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-j66wn" podStartSLOduration=12.264510806 podStartE2EDuration="12.264510806s" podCreationTimestamp="2026-02-18 00:27:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:27:59.25975989 +0000 UTC m=+152.637110832" watchObservedRunningTime="2026-02-18 00:27:59.264510806 +0000 UTC m=+152.641861748"
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.264796 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ckhs7"]
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.334656 4847 patch_prober.go:28] interesting pod/router-default-5444994796-hc9j8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 00:27:59 crc kubenswrapper[4847]: [-]has-synced failed: reason withheld
Feb 18 00:27:59 crc kubenswrapper[4847]: [+]process-running ok
Feb 18 00:27:59 crc kubenswrapper[4847]: healthz check failed
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.335092 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hc9j8" podUID="bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.412684 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.507115 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9jnmn"]
Feb 18 00:27:59 crc kubenswrapper[4847]: W0218 00:27:59.522885 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f4c85a9_c568_472e_b05b_546a70da9391.slice/crio-8d982241b49c3aa1d2ab35918d57a637eba0010b77172c6c9d81b9577a727aa3 WatchSource:0}: Error finding container 8d982241b49c3aa1d2ab35918d57a637eba0010b77172c6c9d81b9577a727aa3: Status 404 returned error can't find the container with id 8d982241b49c3aa1d2ab35918d57a637eba0010b77172c6c9d81b9577a727aa3
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.848551 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bv8f2"]
Feb 18 00:27:59 crc kubenswrapper[4847]: E0218 00:27:59.849411 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b73a0e0b-a65a-4985-b23e-40e2334a47e3" containerName="collect-profiles"
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.849426 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="b73a0e0b-a65a-4985-b23e-40e2334a47e3" containerName="collect-profiles"
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.849546 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="b73a0e0b-a65a-4985-b23e-40e2334a47e3" containerName="collect-profiles"
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.850312 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bv8f2"
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.850735 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-d4c9w"
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.850895 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-d4c9w"
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.854246 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.867651 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-d4c9w"
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.874871 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bv8f2"]
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.924129 4847 patch_prober.go:28] interesting pod/downloads-7954f5f757-sc5ff container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.924193 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-sc5ff" podUID="40549f47-53f3-4990-a2b0-921413ba5862" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.924225 4847 patch_prober.go:28] interesting pod/downloads-7954f5f757-sc5ff container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.924289 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-sc5ff" podUID="40549f47-53f3-4990-a2b0-921413ba5862" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.929466 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-tfxw5"
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.930367 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-tfxw5"
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.931706 4847 patch_prober.go:28] interesting pod/console-f9d7485db-tfxw5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.931757 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tfxw5" podUID="b4d13f62-c469-4050-8974-8ccf32bf0bce" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused"
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.965223 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/767c924a-1203-477f-8501-a65f63965047-catalog-content\") pod \"redhat-marketplace-bv8f2\" (UID: \"767c924a-1203-477f-8501-a65f63965047\") " pod="openshift-marketplace/redhat-marketplace-bv8f2"
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.965279 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/767c924a-1203-477f-8501-a65f63965047-utilities\") pod \"redhat-marketplace-bv8f2\" (UID: \"767c924a-1203-477f-8501-a65f63965047\") " pod="openshift-marketplace/redhat-marketplace-bv8f2"
Feb 18 00:27:59 crc kubenswrapper[4847]: I0218 00:27:59.965324 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2t5j\" (UniqueName: \"kubernetes.io/projected/767c924a-1203-477f-8501-a65f63965047-kube-api-access-s2t5j\") pod \"redhat-marketplace-bv8f2\" (UID: \"767c924a-1203-477f-8501-a65f63965047\") " pod="openshift-marketplace/redhat-marketplace-bv8f2"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.063662 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.063958 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.066724 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/767c924a-1203-477f-8501-a65f63965047-catalog-content\") pod \"redhat-marketplace-bv8f2\" (UID: \"767c924a-1203-477f-8501-a65f63965047\") " pod="openshift-marketplace/redhat-marketplace-bv8f2"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.066795 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/767c924a-1203-477f-8501-a65f63965047-utilities\") pod \"redhat-marketplace-bv8f2\" (UID: \"767c924a-1203-477f-8501-a65f63965047\") " pod="openshift-marketplace/redhat-marketplace-bv8f2"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.066818 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2t5j\" (UniqueName: \"kubernetes.io/projected/767c924a-1203-477f-8501-a65f63965047-kube-api-access-s2t5j\") pod \"redhat-marketplace-bv8f2\" (UID: \"767c924a-1203-477f-8501-a65f63965047\") " pod="openshift-marketplace/redhat-marketplace-bv8f2"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.067773 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/767c924a-1203-477f-8501-a65f63965047-catalog-content\") pod \"redhat-marketplace-bv8f2\" (UID: \"767c924a-1203-477f-8501-a65f63965047\") " pod="openshift-marketplace/redhat-marketplace-bv8f2"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.068505 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/767c924a-1203-477f-8501-a65f63965047-utilities\") pod \"redhat-marketplace-bv8f2\" (UID: \"767c924a-1203-477f-8501-a65f63965047\") " pod="openshift-marketplace/redhat-marketplace-bv8f2"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.074472 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.097400 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2t5j\" (UniqueName: \"kubernetes.io/projected/767c924a-1203-477f-8501-a65f63965047-kube-api-access-s2t5j\") pod \"redhat-marketplace-bv8f2\" (UID: \"767c924a-1203-477f-8501-a65f63965047\") " pod="openshift-marketplace/redhat-marketplace-bv8f2"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.178126 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bv8f2"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.185039 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-8d9lm"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.254905 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jcs2d"]
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.256684 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jcs2d"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.266759 4847 generic.go:334] "Generic (PLEG): container finished" podID="15a457ff-fd78-446b-85ca-acd23651863f" containerID="83019b7cc4a0cc21da344585024440aa2ce7b1dcfdde7881ef39358e4cda322a" exitCode=0
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.267239 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckhs7" event={"ID":"15a457ff-fd78-446b-85ca-acd23651863f","Type":"ContainerDied","Data":"83019b7cc4a0cc21da344585024440aa2ce7b1dcfdde7881ef39358e4cda322a"}
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.267279 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckhs7" event={"ID":"15a457ff-fd78-446b-85ca-acd23651863f","Type":"ContainerStarted","Data":"1f70ba2f62c676c4dc49b8a7ed5cd9239ac348919de96315503917f3596374ce"}
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.271902 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcs2d"]
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.279663 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" event={"ID":"3f4c85a9-c568-472e-b05b-546a70da9391","Type":"ContainerStarted","Data":"b10de552a1ce08d9661a1b8f0f40f4a275aca54ff750d2fdd07809a9d219941b"}
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.279710 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" event={"ID":"3f4c85a9-c568-472e-b05b-546a70da9391","Type":"ContainerStarted","Data":"8d982241b49c3aa1d2ab35918d57a637eba0010b77172c6c9d81b9577a727aa3"}
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.280089 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.305397 4847 generic.go:334] "Generic (PLEG): container finished" podID="40f2a712-6701-4c22-94c2-6a644742459b" containerID="07bb396ab0a22a3d1163c70e3034b646b8e8cfbb5c8289208ad9c40ff356a3b6" exitCode=0
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.305638 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hmdff" event={"ID":"40f2a712-6701-4c22-94c2-6a644742459b","Type":"ContainerDied","Data":"07bb396ab0a22a3d1163c70e3034b646b8e8cfbb5c8289208ad9c40ff356a3b6"}
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.311362 4847 generic.go:334] "Generic (PLEG): container finished" podID="4c5d23e9-80d6-4df1-9484-3d5d452231f6" containerID="ba92f2ec1a88fd702420bfc78805e15054a1f478c279b03859220391618e4491" exitCode=0
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.311446 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tqxr4" event={"ID":"4c5d23e9-80d6-4df1-9484-3d5d452231f6","Type":"ContainerDied","Data":"ba92f2ec1a88fd702420bfc78805e15054a1f478c279b03859220391618e4491"}
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.317781 4847 generic.go:334] "Generic (PLEG): container finished" podID="bb0e353b-9f34-432f-92f1-9102f53aeff3" containerID="eeec3c165dc7b83984731197b9f6f528474c5137a845a7e99ff2536b2a38b16a" exitCode=0
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.318406 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px9xt" event={"ID":"bb0e353b-9f34-432f-92f1-9102f53aeff3","Type":"ContainerDied","Data":"eeec3c165dc7b83984731197b9f6f528474c5137a845a7e99ff2536b2a38b16a"}
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.324754 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-d4c9w"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.329103 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-hc9j8"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.331983 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6dmsr"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.332802 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" podStartSLOduration=133.332779529 podStartE2EDuration="2m13.332779529s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:28:00.329488519 +0000 UTC m=+153.706839471" watchObservedRunningTime="2026-02-18 00:28:00.332779529 +0000 UTC m=+153.710130471"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.337538 4847 patch_prober.go:28] interesting pod/router-default-5444994796-hc9j8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 00:28:00 crc kubenswrapper[4847]: [-]has-synced failed: reason withheld
Feb 18 00:28:00 crc kubenswrapper[4847]: [+]process-running ok
Feb 18 00:28:00 crc kubenswrapper[4847]: healthz check failed
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.337618 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hc9j8" podUID="bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.373297 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcqpk\" (UniqueName: \"kubernetes.io/projected/23c01b00-fb74-42f7-8a1a-343e78623f37-kube-api-access-vcqpk\") pod \"redhat-marketplace-jcs2d\" (UID: \"23c01b00-fb74-42f7-8a1a-343e78623f37\") " pod="openshift-marketplace/redhat-marketplace-jcs2d"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.373393 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23c01b00-fb74-42f7-8a1a-343e78623f37-utilities\") pod \"redhat-marketplace-jcs2d\" (UID: \"23c01b00-fb74-42f7-8a1a-343e78623f37\") " pod="openshift-marketplace/redhat-marketplace-jcs2d"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.373480 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23c01b00-fb74-42f7-8a1a-343e78623f37-catalog-content\") pod \"redhat-marketplace-jcs2d\" (UID: \"23c01b00-fb74-42f7-8a1a-343e78623f37\") " pod="openshift-marketplace/redhat-marketplace-jcs2d"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.474618 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcqpk\" (UniqueName: \"kubernetes.io/projected/23c01b00-fb74-42f7-8a1a-343e78623f37-kube-api-access-vcqpk\") pod \"redhat-marketplace-jcs2d\" (UID: \"23c01b00-fb74-42f7-8a1a-343e78623f37\") " pod="openshift-marketplace/redhat-marketplace-jcs2d"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.475706 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23c01b00-fb74-42f7-8a1a-343e78623f37-utilities\") pod \"redhat-marketplace-jcs2d\" (UID: \"23c01b00-fb74-42f7-8a1a-343e78623f37\") " pod="openshift-marketplace/redhat-marketplace-jcs2d"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.475953 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23c01b00-fb74-42f7-8a1a-343e78623f37-catalog-content\") pod \"redhat-marketplace-jcs2d\" (UID: \"23c01b00-fb74-42f7-8a1a-343e78623f37\") " pod="openshift-marketplace/redhat-marketplace-jcs2d"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.485764 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23c01b00-fb74-42f7-8a1a-343e78623f37-utilities\") pod \"redhat-marketplace-jcs2d\" (UID: \"23c01b00-fb74-42f7-8a1a-343e78623f37\") " pod="openshift-marketplace/redhat-marketplace-jcs2d"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.487322 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23c01b00-fb74-42f7-8a1a-343e78623f37-catalog-content\") pod \"redhat-marketplace-jcs2d\" (UID: \"23c01b00-fb74-42f7-8a1a-343e78623f37\") " pod="openshift-marketplace/redhat-marketplace-jcs2d"
Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.510549 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcqpk\" (UniqueName: 
\"kubernetes.io/projected/23c01b00-fb74-42f7-8a1a-343e78623f37-kube-api-access-vcqpk\") pod \"redhat-marketplace-jcs2d\" (UID: \"23c01b00-fb74-42f7-8a1a-343e78623f37\") " pod="openshift-marketplace/redhat-marketplace-jcs2d" Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.541178 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bv8f2"] Feb 18 00:28:00 crc kubenswrapper[4847]: W0218 00:28:00.567909 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod767c924a_1203_477f_8501_a65f63965047.slice/crio-6b94b6f44d5dd2acdf3cb899f318c2cae6ac8a600883ae27765821b73b303ec2 WatchSource:0}: Error finding container 6b94b6f44d5dd2acdf3cb899f318c2cae6ac8a600883ae27765821b73b303ec2: Status 404 returned error can't find the container with id 6b94b6f44d5dd2acdf3cb899f318c2cae6ac8a600883ae27765821b73b303ec2 Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.582148 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jcs2d" Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.852435 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lr9xc"] Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.854074 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lr9xc" Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.866338 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.872349 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lr9xc"] Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.984402 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8zjk\" (UniqueName: \"kubernetes.io/projected/67f6671f-0af7-44a3-9204-8fa77554d1d1-kube-api-access-n8zjk\") pod \"redhat-operators-lr9xc\" (UID: \"67f6671f-0af7-44a3-9204-8fa77554d1d1\") " pod="openshift-marketplace/redhat-operators-lr9xc" Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.984478 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f6671f-0af7-44a3-9204-8fa77554d1d1-utilities\") pod \"redhat-operators-lr9xc\" (UID: \"67f6671f-0af7-44a3-9204-8fa77554d1d1\") " pod="openshift-marketplace/redhat-operators-lr9xc" Feb 18 00:28:00 crc kubenswrapper[4847]: I0218 00:28:00.984645 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f6671f-0af7-44a3-9204-8fa77554d1d1-catalog-content\") pod \"redhat-operators-lr9xc\" (UID: \"67f6671f-0af7-44a3-9204-8fa77554d1d1\") " pod="openshift-marketplace/redhat-operators-lr9xc" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.074362 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcs2d"] Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.085957 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-n8zjk\" (UniqueName: \"kubernetes.io/projected/67f6671f-0af7-44a3-9204-8fa77554d1d1-kube-api-access-n8zjk\") pod \"redhat-operators-lr9xc\" (UID: \"67f6671f-0af7-44a3-9204-8fa77554d1d1\") " pod="openshift-marketplace/redhat-operators-lr9xc" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.086008 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f6671f-0af7-44a3-9204-8fa77554d1d1-utilities\") pod \"redhat-operators-lr9xc\" (UID: \"67f6671f-0af7-44a3-9204-8fa77554d1d1\") " pod="openshift-marketplace/redhat-operators-lr9xc" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.086120 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f6671f-0af7-44a3-9204-8fa77554d1d1-catalog-content\") pod \"redhat-operators-lr9xc\" (UID: \"67f6671f-0af7-44a3-9204-8fa77554d1d1\") " pod="openshift-marketplace/redhat-operators-lr9xc" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.086713 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f6671f-0af7-44a3-9204-8fa77554d1d1-catalog-content\") pod \"redhat-operators-lr9xc\" (UID: \"67f6671f-0af7-44a3-9204-8fa77554d1d1\") " pod="openshift-marketplace/redhat-operators-lr9xc" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.087381 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f6671f-0af7-44a3-9204-8fa77554d1d1-utilities\") pod \"redhat-operators-lr9xc\" (UID: \"67f6671f-0af7-44a3-9204-8fa77554d1d1\") " pod="openshift-marketplace/redhat-operators-lr9xc" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.122815 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8zjk\" (UniqueName: 
\"kubernetes.io/projected/67f6671f-0af7-44a3-9204-8fa77554d1d1-kube-api-access-n8zjk\") pod \"redhat-operators-lr9xc\" (UID: \"67f6671f-0af7-44a3-9204-8fa77554d1d1\") " pod="openshift-marketplace/redhat-operators-lr9xc" Feb 18 00:28:01 crc kubenswrapper[4847]: W0218 00:28:01.123680 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23c01b00_fb74_42f7_8a1a_343e78623f37.slice/crio-5e206661476bfc65daa8c05e69aa7464a2db76c311407b92bcbb7ec93e7ead9c WatchSource:0}: Error finding container 5e206661476bfc65daa8c05e69aa7464a2db76c311407b92bcbb7ec93e7ead9c: Status 404 returned error can't find the container with id 5e206661476bfc65daa8c05e69aa7464a2db76c311407b92bcbb7ec93e7ead9c Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.249900 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qprjq"] Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.252569 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qprjq" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.252626 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lr9xc" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.271060 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qprjq"] Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.334606 4847 patch_prober.go:28] interesting pod/router-default-5444994796-hc9j8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:28:01 crc kubenswrapper[4847]: [-]has-synced failed: reason withheld Feb 18 00:28:01 crc kubenswrapper[4847]: [+]process-running ok Feb 18 00:28:01 crc kubenswrapper[4847]: healthz check failed Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.334718 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hc9j8" podUID="bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.345481 4847 generic.go:334] "Generic (PLEG): container finished" podID="767c924a-1203-477f-8501-a65f63965047" containerID="e92ff5160c160d87e9df4f057b7bf81f0d8a6d862fc449ea593af3bf458eeb98" exitCode=0 Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.345888 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv8f2" event={"ID":"767c924a-1203-477f-8501-a65f63965047","Type":"ContainerDied","Data":"e92ff5160c160d87e9df4f057b7bf81f0d8a6d862fc449ea593af3bf458eeb98"} Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.345934 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv8f2" event={"ID":"767c924a-1203-477f-8501-a65f63965047","Type":"ContainerStarted","Data":"6b94b6f44d5dd2acdf3cb899f318c2cae6ac8a600883ae27765821b73b303ec2"} Feb 18 00:28:01 crc 
kubenswrapper[4847]: I0218 00:28:01.361708 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcs2d" event={"ID":"23c01b00-fb74-42f7-8a1a-343e78623f37","Type":"ContainerStarted","Data":"10d173d479729635d06cd58ffffa6ed257810b14671ed25a1f5dac94ff8c281d"} Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.361760 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcs2d" event={"ID":"23c01b00-fb74-42f7-8a1a-343e78623f37","Type":"ContainerStarted","Data":"5e206661476bfc65daa8c05e69aa7464a2db76c311407b92bcbb7ec93e7ead9c"} Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.393448 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnv5g\" (UniqueName: \"kubernetes.io/projected/4419c48a-0a19-486b-ad17-b88461b9377b-kube-api-access-mnv5g\") pod \"redhat-operators-qprjq\" (UID: \"4419c48a-0a19-486b-ad17-b88461b9377b\") " pod="openshift-marketplace/redhat-operators-qprjq" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.393523 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4419c48a-0a19-486b-ad17-b88461b9377b-utilities\") pod \"redhat-operators-qprjq\" (UID: \"4419c48a-0a19-486b-ad17-b88461b9377b\") " pod="openshift-marketplace/redhat-operators-qprjq" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.393777 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4419c48a-0a19-486b-ad17-b88461b9377b-catalog-content\") pod \"redhat-operators-qprjq\" (UID: \"4419c48a-0a19-486b-ad17-b88461b9377b\") " pod="openshift-marketplace/redhat-operators-qprjq" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.496392 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mnv5g\" (UniqueName: \"kubernetes.io/projected/4419c48a-0a19-486b-ad17-b88461b9377b-kube-api-access-mnv5g\") pod \"redhat-operators-qprjq\" (UID: \"4419c48a-0a19-486b-ad17-b88461b9377b\") " pod="openshift-marketplace/redhat-operators-qprjq" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.496443 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4419c48a-0a19-486b-ad17-b88461b9377b-utilities\") pod \"redhat-operators-qprjq\" (UID: \"4419c48a-0a19-486b-ad17-b88461b9377b\") " pod="openshift-marketplace/redhat-operators-qprjq" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.496483 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4419c48a-0a19-486b-ad17-b88461b9377b-catalog-content\") pod \"redhat-operators-qprjq\" (UID: \"4419c48a-0a19-486b-ad17-b88461b9377b\") " pod="openshift-marketplace/redhat-operators-qprjq" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.498118 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4419c48a-0a19-486b-ad17-b88461b9377b-catalog-content\") pod \"redhat-operators-qprjq\" (UID: \"4419c48a-0a19-486b-ad17-b88461b9377b\") " pod="openshift-marketplace/redhat-operators-qprjq" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.498827 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4419c48a-0a19-486b-ad17-b88461b9377b-utilities\") pod \"redhat-operators-qprjq\" (UID: \"4419c48a-0a19-486b-ad17-b88461b9377b\") " pod="openshift-marketplace/redhat-operators-qprjq" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.536366 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnv5g\" (UniqueName: 
\"kubernetes.io/projected/4419c48a-0a19-486b-ad17-b88461b9377b-kube-api-access-mnv5g\") pod \"redhat-operators-qprjq\" (UID: \"4419c48a-0a19-486b-ad17-b88461b9377b\") " pod="openshift-marketplace/redhat-operators-qprjq" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.602534 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qprjq" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.704578 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.705580 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.707745 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.710917 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.711265 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.759723 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lr9xc"] Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.764306 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.765359 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.773647 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.774171 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.792667 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.806201 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c17b0366-f840-4bce-96cb-bb8e90eaf4fa-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c17b0366-f840-4bce-96cb-bb8e90eaf4fa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.806641 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c17b0366-f840-4bce-96cb-bb8e90eaf4fa-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c17b0366-f840-4bce-96cb-bb8e90eaf4fa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.909772 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c17b0366-f840-4bce-96cb-bb8e90eaf4fa-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c17b0366-f840-4bce-96cb-bb8e90eaf4fa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.909816 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf0bb0c0-dce7-448a-99a2-b33c10f4288d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cf0bb0c0-dce7-448a-99a2-b33c10f4288d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.909844 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c17b0366-f840-4bce-96cb-bb8e90eaf4fa-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c17b0366-f840-4bce-96cb-bb8e90eaf4fa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.909874 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf0bb0c0-dce7-448a-99a2-b33c10f4288d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cf0bb0c0-dce7-448a-99a2-b33c10f4288d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.909967 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c17b0366-f840-4bce-96cb-bb8e90eaf4fa-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c17b0366-f840-4bce-96cb-bb8e90eaf4fa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 00:28:01 crc kubenswrapper[4847]: I0218 00:28:01.938759 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c17b0366-f840-4bce-96cb-bb8e90eaf4fa-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c17b0366-f840-4bce-96cb-bb8e90eaf4fa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 00:28:02 crc kubenswrapper[4847]: I0218 00:28:02.011193 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf0bb0c0-dce7-448a-99a2-b33c10f4288d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cf0bb0c0-dce7-448a-99a2-b33c10f4288d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 00:28:02 crc kubenswrapper[4847]: I0218 00:28:02.011245 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf0bb0c0-dce7-448a-99a2-b33c10f4288d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cf0bb0c0-dce7-448a-99a2-b33c10f4288d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 00:28:02 crc kubenswrapper[4847]: I0218 00:28:02.011541 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf0bb0c0-dce7-448a-99a2-b33c10f4288d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cf0bb0c0-dce7-448a-99a2-b33c10f4288d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 00:28:02 crc kubenswrapper[4847]: I0218 00:28:02.040163 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 00:28:02 crc kubenswrapper[4847]: I0218 00:28:02.042634 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf0bb0c0-dce7-448a-99a2-b33c10f4288d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cf0bb0c0-dce7-448a-99a2-b33c10f4288d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 00:28:02 crc kubenswrapper[4847]: I0218 00:28:02.103116 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 00:28:02 crc kubenswrapper[4847]: I0218 00:28:02.293979 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qprjq"] Feb 18 00:28:02 crc kubenswrapper[4847]: I0218 00:28:02.376257 4847 patch_prober.go:28] interesting pod/router-default-5444994796-hc9j8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:28:02 crc kubenswrapper[4847]: [-]has-synced failed: reason withheld Feb 18 00:28:02 crc kubenswrapper[4847]: [+]process-running ok Feb 18 00:28:02 crc kubenswrapper[4847]: healthz check failed Feb 18 00:28:02 crc kubenswrapper[4847]: I0218 00:28:02.376345 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hc9j8" podUID="bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:28:02 crc kubenswrapper[4847]: I0218 00:28:02.394422 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qprjq" event={"ID":"4419c48a-0a19-486b-ad17-b88461b9377b","Type":"ContainerStarted","Data":"90d51cdfdd3e191167ad84d5377d747742e93232e8c72a76ee02f4489965dd9e"} Feb 18 00:28:02 crc kubenswrapper[4847]: I0218 00:28:02.413926 4847 generic.go:334] "Generic (PLEG): container finished" podID="23c01b00-fb74-42f7-8a1a-343e78623f37" containerID="10d173d479729635d06cd58ffffa6ed257810b14671ed25a1f5dac94ff8c281d" exitCode=0 Feb 18 00:28:02 crc kubenswrapper[4847]: I0218 00:28:02.414018 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcs2d" event={"ID":"23c01b00-fb74-42f7-8a1a-343e78623f37","Type":"ContainerDied","Data":"10d173d479729635d06cd58ffffa6ed257810b14671ed25a1f5dac94ff8c281d"} Feb 18 00:28:02 
crc kubenswrapper[4847]: I0218 00:28:02.430994 4847 generic.go:334] "Generic (PLEG): container finished" podID="67f6671f-0af7-44a3-9204-8fa77554d1d1" containerID="95c6af5db99a1eef682e3ab701a20369b8127dd271a008ea5972dd0367c5a48d" exitCode=0 Feb 18 00:28:02 crc kubenswrapper[4847]: I0218 00:28:02.431034 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lr9xc" event={"ID":"67f6671f-0af7-44a3-9204-8fa77554d1d1","Type":"ContainerDied","Data":"95c6af5db99a1eef682e3ab701a20369b8127dd271a008ea5972dd0367c5a48d"} Feb 18 00:28:02 crc kubenswrapper[4847]: I0218 00:28:02.431060 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lr9xc" event={"ID":"67f6671f-0af7-44a3-9204-8fa77554d1d1","Type":"ContainerStarted","Data":"62ad798fb38bcf35da45f54e13b263c3ca6ae6dec395357d48327edaa36b452e"} Feb 18 00:28:02 crc kubenswrapper[4847]: I0218 00:28:02.657046 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 18 00:28:02 crc kubenswrapper[4847]: I0218 00:28:02.820220 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 18 00:28:03 crc kubenswrapper[4847]: I0218 00:28:03.332844 4847 patch_prober.go:28] interesting pod/router-default-5444994796-hc9j8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:28:03 crc kubenswrapper[4847]: [-]has-synced failed: reason withheld Feb 18 00:28:03 crc kubenswrapper[4847]: [+]process-running ok Feb 18 00:28:03 crc kubenswrapper[4847]: healthz check failed Feb 18 00:28:03 crc kubenswrapper[4847]: I0218 00:28:03.333466 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hc9j8" podUID="bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:28:03 crc kubenswrapper[4847]: I0218 00:28:03.508134 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c17b0366-f840-4bce-96cb-bb8e90eaf4fa","Type":"ContainerStarted","Data":"f8b7ae29d444bdd4e5ae3096d71d4e06b4774af27734ab85f858cc51061a43e5"} Feb 18 00:28:03 crc kubenswrapper[4847]: I0218 00:28:03.513002 4847 generic.go:334] "Generic (PLEG): container finished" podID="4419c48a-0a19-486b-ad17-b88461b9377b" containerID="60d6432237f77988aa3bbd332b483f1b534623ba7c90222707d18469efc2a216" exitCode=0 Feb 18 00:28:03 crc kubenswrapper[4847]: I0218 00:28:03.513065 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qprjq" event={"ID":"4419c48a-0a19-486b-ad17-b88461b9377b","Type":"ContainerDied","Data":"60d6432237f77988aa3bbd332b483f1b534623ba7c90222707d18469efc2a216"} Feb 18 00:28:03 crc kubenswrapper[4847]: I0218 00:28:03.518579 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cf0bb0c0-dce7-448a-99a2-b33c10f4288d","Type":"ContainerStarted","Data":"4a2a24dc2b57dbe3c767d32ae59f28ecdaf27e9db4676648c6e3dec767627cb2"} Feb 18 00:28:04 crc kubenswrapper[4847]: I0218 00:28:04.333174 4847 patch_prober.go:28] interesting pod/router-default-5444994796-hc9j8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:28:04 crc kubenswrapper[4847]: [-]has-synced failed: reason withheld Feb 18 00:28:04 crc kubenswrapper[4847]: [+]process-running ok Feb 18 00:28:04 crc kubenswrapper[4847]: healthz check failed Feb 18 00:28:04 crc kubenswrapper[4847]: I0218 00:28:04.333253 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hc9j8" 
podUID="bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:28:04 crc kubenswrapper[4847]: I0218 00:28:04.530406 4847 generic.go:334] "Generic (PLEG): container finished" podID="cf0bb0c0-dce7-448a-99a2-b33c10f4288d" containerID="7a9cbfb84576e4913543b2ccb851178d112e40dc7a4a6cb7604fcaedf8621cd4" exitCode=0 Feb 18 00:28:04 crc kubenswrapper[4847]: I0218 00:28:04.530508 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cf0bb0c0-dce7-448a-99a2-b33c10f4288d","Type":"ContainerDied","Data":"7a9cbfb84576e4913543b2ccb851178d112e40dc7a4a6cb7604fcaedf8621cd4"} Feb 18 00:28:04 crc kubenswrapper[4847]: I0218 00:28:04.535078 4847 generic.go:334] "Generic (PLEG): container finished" podID="c17b0366-f840-4bce-96cb-bb8e90eaf4fa" containerID="deeb9491d80a1fbcddd057579262e0b821d0fbc5b03b0dca28d7f307799c5ac5" exitCode=0 Feb 18 00:28:04 crc kubenswrapper[4847]: I0218 00:28:04.535123 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c17b0366-f840-4bce-96cb-bb8e90eaf4fa","Type":"ContainerDied","Data":"deeb9491d80a1fbcddd057579262e0b821d0fbc5b03b0dca28d7f307799c5ac5"} Feb 18 00:28:05 crc kubenswrapper[4847]: I0218 00:28:05.335447 4847 patch_prober.go:28] interesting pod/router-default-5444994796-hc9j8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:28:05 crc kubenswrapper[4847]: [-]has-synced failed: reason withheld Feb 18 00:28:05 crc kubenswrapper[4847]: [+]process-running ok Feb 18 00:28:05 crc kubenswrapper[4847]: healthz check failed Feb 18 00:28:05 crc kubenswrapper[4847]: I0218 00:28:05.335573 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hc9j8" 
podUID="bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:28:05 crc kubenswrapper[4847]: I0218 00:28:05.815997 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-n67v9" Feb 18 00:28:06 crc kubenswrapper[4847]: I0218 00:28:06.061260 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 00:28:06 crc kubenswrapper[4847]: I0218 00:28:06.079892 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 00:28:06 crc kubenswrapper[4847]: I0218 00:28:06.202309 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c17b0366-f840-4bce-96cb-bb8e90eaf4fa-kube-api-access\") pod \"c17b0366-f840-4bce-96cb-bb8e90eaf4fa\" (UID: \"c17b0366-f840-4bce-96cb-bb8e90eaf4fa\") " Feb 18 00:28:06 crc kubenswrapper[4847]: I0218 00:28:06.202473 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf0bb0c0-dce7-448a-99a2-b33c10f4288d-kube-api-access\") pod \"cf0bb0c0-dce7-448a-99a2-b33c10f4288d\" (UID: \"cf0bb0c0-dce7-448a-99a2-b33c10f4288d\") " Feb 18 00:28:06 crc kubenswrapper[4847]: I0218 00:28:06.202757 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c17b0366-f840-4bce-96cb-bb8e90eaf4fa-kubelet-dir\") pod \"c17b0366-f840-4bce-96cb-bb8e90eaf4fa\" (UID: \"c17b0366-f840-4bce-96cb-bb8e90eaf4fa\") " Feb 18 00:28:06 crc kubenswrapper[4847]: I0218 00:28:06.203050 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/cf0bb0c0-dce7-448a-99a2-b33c10f4288d-kubelet-dir\") pod \"cf0bb0c0-dce7-448a-99a2-b33c10f4288d\" (UID: \"cf0bb0c0-dce7-448a-99a2-b33c10f4288d\") " Feb 18 00:28:06 crc kubenswrapper[4847]: I0218 00:28:06.203659 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf0bb0c0-dce7-448a-99a2-b33c10f4288d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cf0bb0c0-dce7-448a-99a2-b33c10f4288d" (UID: "cf0bb0c0-dce7-448a-99a2-b33c10f4288d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:28:06 crc kubenswrapper[4847]: I0218 00:28:06.211323 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c17b0366-f840-4bce-96cb-bb8e90eaf4fa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c17b0366-f840-4bce-96cb-bb8e90eaf4fa" (UID: "c17b0366-f840-4bce-96cb-bb8e90eaf4fa"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:28:06 crc kubenswrapper[4847]: I0218 00:28:06.211420 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf0bb0c0-dce7-448a-99a2-b33c10f4288d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cf0bb0c0-dce7-448a-99a2-b33c10f4288d" (UID: "cf0bb0c0-dce7-448a-99a2-b33c10f4288d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:28:06 crc kubenswrapper[4847]: I0218 00:28:06.211467 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c17b0366-f840-4bce-96cb-bb8e90eaf4fa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c17b0366-f840-4bce-96cb-bb8e90eaf4fa" (UID: "c17b0366-f840-4bce-96cb-bb8e90eaf4fa"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:28:06 crc kubenswrapper[4847]: I0218 00:28:06.305567 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf0bb0c0-dce7-448a-99a2-b33c10f4288d-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:28:06 crc kubenswrapper[4847]: I0218 00:28:06.305649 4847 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c17b0366-f840-4bce-96cb-bb8e90eaf4fa-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:28:06 crc kubenswrapper[4847]: I0218 00:28:06.305668 4847 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf0bb0c0-dce7-448a-99a2-b33c10f4288d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:28:06 crc kubenswrapper[4847]: I0218 00:28:06.305687 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c17b0366-f840-4bce-96cb-bb8e90eaf4fa-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:28:06 crc kubenswrapper[4847]: I0218 00:28:06.333007 4847 patch_prober.go:28] interesting pod/router-default-5444994796-hc9j8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:28:06 crc kubenswrapper[4847]: [-]has-synced failed: reason withheld Feb 18 00:28:06 crc kubenswrapper[4847]: [+]process-running ok Feb 18 00:28:06 crc kubenswrapper[4847]: healthz check failed Feb 18 00:28:06 crc kubenswrapper[4847]: I0218 00:28:06.333122 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hc9j8" podUID="bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:28:06 crc kubenswrapper[4847]: I0218 
00:28:06.611112 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cf0bb0c0-dce7-448a-99a2-b33c10f4288d","Type":"ContainerDied","Data":"4a2a24dc2b57dbe3c767d32ae59f28ecdaf27e9db4676648c6e3dec767627cb2"} Feb 18 00:28:06 crc kubenswrapper[4847]: I0218 00:28:06.611155 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a2a24dc2b57dbe3c767d32ae59f28ecdaf27e9db4676648c6e3dec767627cb2" Feb 18 00:28:06 crc kubenswrapper[4847]: I0218 00:28:06.611219 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 18 00:28:06 crc kubenswrapper[4847]: I0218 00:28:06.615226 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c17b0366-f840-4bce-96cb-bb8e90eaf4fa","Type":"ContainerDied","Data":"f8b7ae29d444bdd4e5ae3096d71d4e06b4774af27734ab85f858cc51061a43e5"} Feb 18 00:28:06 crc kubenswrapper[4847]: I0218 00:28:06.615263 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8b7ae29d444bdd4e5ae3096d71d4e06b4774af27734ab85f858cc51061a43e5" Feb 18 00:28:06 crc kubenswrapper[4847]: I0218 00:28:06.615306 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 18 00:28:07 crc kubenswrapper[4847]: I0218 00:28:07.334424 4847 patch_prober.go:28] interesting pod/router-default-5444994796-hc9j8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:28:07 crc kubenswrapper[4847]: [-]has-synced failed: reason withheld Feb 18 00:28:07 crc kubenswrapper[4847]: [+]process-running ok Feb 18 00:28:07 crc kubenswrapper[4847]: healthz check failed Feb 18 00:28:07 crc kubenswrapper[4847]: I0218 00:28:07.334482 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hc9j8" podUID="bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:28:08 crc kubenswrapper[4847]: I0218 00:28:08.332413 4847 patch_prober.go:28] interesting pod/router-default-5444994796-hc9j8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:28:08 crc kubenswrapper[4847]: [-]has-synced failed: reason withheld Feb 18 00:28:08 crc kubenswrapper[4847]: [+]process-running ok Feb 18 00:28:08 crc kubenswrapper[4847]: healthz check failed Feb 18 00:28:08 crc kubenswrapper[4847]: I0218 00:28:08.332834 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hc9j8" podUID="bcbd7c3e-0fdd-487e-9b0d-404e5c7666d4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:28:09 crc kubenswrapper[4847]: I0218 00:28:09.381551 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-hc9j8" Feb 18 00:28:09 crc kubenswrapper[4847]: I0218 00:28:09.385039 4847 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-hc9j8" Feb 18 00:28:09 crc kubenswrapper[4847]: I0218 00:28:09.930086 4847 patch_prober.go:28] interesting pod/console-f9d7485db-tfxw5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Feb 18 00:28:09 crc kubenswrapper[4847]: I0218 00:28:09.930632 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-tfxw5" podUID="b4d13f62-c469-4050-8974-8ccf32bf0bce" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Feb 18 00:28:09 crc kubenswrapper[4847]: I0218 00:28:09.944273 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-sc5ff" Feb 18 00:28:10 crc kubenswrapper[4847]: I0218 00:28:10.719153 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs\") pod \"network-metrics-daemon-5rg76\" (UID: \"1a7318b6-f24d-4785-bd56-ad5ecec493da\") " pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:28:10 crc kubenswrapper[4847]: I0218 00:28:10.728291 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1a7318b6-f24d-4785-bd56-ad5ecec493da-metrics-certs\") pod \"network-metrics-daemon-5rg76\" (UID: \"1a7318b6-f24d-4785-bd56-ad5ecec493da\") " pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:28:10 crc kubenswrapper[4847]: I0218 00:28:10.852694 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5rg76" Feb 18 00:28:19 crc kubenswrapper[4847]: I0218 00:28:19.077161 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:28:19 crc kubenswrapper[4847]: I0218 00:28:19.933175 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:28:19 crc kubenswrapper[4847]: I0218 00:28:19.938224 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:28:23 crc kubenswrapper[4847]: I0218 00:28:23.491466 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:28:23 crc kubenswrapper[4847]: I0218 00:28:23.491933 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:28:24 crc kubenswrapper[4847]: I0218 00:28:24.777864 4847 generic.go:334] "Generic (PLEG): container finished" podID="17d9fff8-b1cd-4124-8dc8-607c81e15c21" containerID="7b9358a8433df226df2c3c3dc77c192ccc6699953b493d6c4c1d0f833d3a5e8b" exitCode=0 Feb 18 00:28:24 crc kubenswrapper[4847]: I0218 00:28:24.778444 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29522880-98mw7" event={"ID":"17d9fff8-b1cd-4124-8dc8-607c81e15c21","Type":"ContainerDied","Data":"7b9358a8433df226df2c3c3dc77c192ccc6699953b493d6c4c1d0f833d3a5e8b"} Feb 18 00:28:27 crc 
kubenswrapper[4847]: I0218 00:28:27.076366 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5rg76"] Feb 18 00:28:28 crc kubenswrapper[4847]: I0218 00:28:28.804568 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5rg76" event={"ID":"1a7318b6-f24d-4785-bd56-ad5ecec493da","Type":"ContainerStarted","Data":"ee02fc0619794746785fbca213b0a1f523c9f2810c4c57a36d3864333bb5d60d"} Feb 18 00:28:29 crc kubenswrapper[4847]: I0218 00:28:29.976019 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29522880-98mw7" Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.120684 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/17d9fff8-b1cd-4124-8dc8-607c81e15c21-serviceca\") pod \"17d9fff8-b1cd-4124-8dc8-607c81e15c21\" (UID: \"17d9fff8-b1cd-4124-8dc8-607c81e15c21\") " Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.120736 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8952z\" (UniqueName: \"kubernetes.io/projected/17d9fff8-b1cd-4124-8dc8-607c81e15c21-kube-api-access-8952z\") pod \"17d9fff8-b1cd-4124-8dc8-607c81e15c21\" (UID: \"17d9fff8-b1cd-4124-8dc8-607c81e15c21\") " Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.121830 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17d9fff8-b1cd-4124-8dc8-607c81e15c21-serviceca" (OuterVolumeSpecName: "serviceca") pod "17d9fff8-b1cd-4124-8dc8-607c81e15c21" (UID: "17d9fff8-b1cd-4124-8dc8-607c81e15c21"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.128149 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17d9fff8-b1cd-4124-8dc8-607c81e15c21-kube-api-access-8952z" (OuterVolumeSpecName: "kube-api-access-8952z") pod "17d9fff8-b1cd-4124-8dc8-607c81e15c21" (UID: "17d9fff8-b1cd-4124-8dc8-607c81e15c21"). InnerVolumeSpecName "kube-api-access-8952z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.222261 4847 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/17d9fff8-b1cd-4124-8dc8-607c81e15c21-serviceca\") on node \"crc\" DevicePath \"\"" Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.222306 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8952z\" (UniqueName: \"kubernetes.io/projected/17d9fff8-b1cd-4124-8dc8-607c81e15c21-kube-api-access-8952z\") on node \"crc\" DevicePath \"\"" Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.720594 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-n6zk7" Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.820212 4847 generic.go:334] "Generic (PLEG): container finished" podID="23c01b00-fb74-42f7-8a1a-343e78623f37" containerID="ce22a845211a9a2c8b193b1620a9a1e45a2576e4e938da28a3337a262ba8ba6d" exitCode=0 Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.820555 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcs2d" event={"ID":"23c01b00-fb74-42f7-8a1a-343e78623f37","Type":"ContainerDied","Data":"ce22a845211a9a2c8b193b1620a9a1e45a2576e4e938da28a3337a262ba8ba6d"} Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.824139 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-pruner-29522880-98mw7" event={"ID":"17d9fff8-b1cd-4124-8dc8-607c81e15c21","Type":"ContainerDied","Data":"9a305ca93470f9dcd360556a4e4b17d7fd40b9820c4506c8eec2c34445308bce"} Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.824185 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a305ca93470f9dcd360556a4e4b17d7fd40b9820c4506c8eec2c34445308bce" Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.824258 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29522880-98mw7" Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.845907 4847 generic.go:334] "Generic (PLEG): container finished" podID="4c5d23e9-80d6-4df1-9484-3d5d452231f6" containerID="f128dfb0226ce3ff25bc536358c92a3dcfda6686f89b68d0dc5fef9e0d2f2bce" exitCode=0 Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.846016 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tqxr4" event={"ID":"4c5d23e9-80d6-4df1-9484-3d5d452231f6","Type":"ContainerDied","Data":"f128dfb0226ce3ff25bc536358c92a3dcfda6686f89b68d0dc5fef9e0d2f2bce"} Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.855223 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5rg76" event={"ID":"1a7318b6-f24d-4785-bd56-ad5ecec493da","Type":"ContainerStarted","Data":"adb8721cb4c5ded481a740e033d6e92d96c7e76aa1f20a6ff706fe7d1f367bc1"} Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.860782 4847 generic.go:334] "Generic (PLEG): container finished" podID="767c924a-1203-477f-8501-a65f63965047" containerID="294658603ba404284b4ed09ccc3da841ce08fddc1385132aede0ae99ea303576" exitCode=0 Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.860851 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv8f2" 
event={"ID":"767c924a-1203-477f-8501-a65f63965047","Type":"ContainerDied","Data":"294658603ba404284b4ed09ccc3da841ce08fddc1385132aede0ae99ea303576"} Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.867939 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qprjq" event={"ID":"4419c48a-0a19-486b-ad17-b88461b9377b","Type":"ContainerStarted","Data":"ca840788d5e2cec5bc834eab43dfc9b4a2c7cfd1eb5b2c4e260d0e425621f360"} Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.873321 4847 generic.go:334] "Generic (PLEG): container finished" podID="bb0e353b-9f34-432f-92f1-9102f53aeff3" containerID="86c01a3e4aeb957109777a1d7f3e7fbae13bddc3dadc1db183ff3bee09da9b1a" exitCode=0 Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.873432 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px9xt" event={"ID":"bb0e353b-9f34-432f-92f1-9102f53aeff3","Type":"ContainerDied","Data":"86c01a3e4aeb957109777a1d7f3e7fbae13bddc3dadc1db183ff3bee09da9b1a"} Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.882296 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lr9xc" event={"ID":"67f6671f-0af7-44a3-9204-8fa77554d1d1","Type":"ContainerStarted","Data":"90b9a71fd35a2013abfcbab5bc4b2a5ce4ed994c23284cfeb2427d681386f054"} Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.885908 4847 generic.go:334] "Generic (PLEG): container finished" podID="15a457ff-fd78-446b-85ca-acd23651863f" containerID="b0d4906f150fc21daea04395ce097f94f660e5fff0f5e85616e0f20a9f2a362f" exitCode=0 Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.885994 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckhs7" event={"ID":"15a457ff-fd78-446b-85ca-acd23651863f","Type":"ContainerDied","Data":"b0d4906f150fc21daea04395ce097f94f660e5fff0f5e85616e0f20a9f2a362f"} Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 
00:28:30.893083 4847 generic.go:334] "Generic (PLEG): container finished" podID="40f2a712-6701-4c22-94c2-6a644742459b" containerID="531ac520c2ca8d84316f32d1cae35c397d0d9e17e82c247d19533bd43b9ad1b6" exitCode=0 Feb 18 00:28:30 crc kubenswrapper[4847]: I0218 00:28:30.893149 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hmdff" event={"ID":"40f2a712-6701-4c22-94c2-6a644742459b","Type":"ContainerDied","Data":"531ac520c2ca8d84316f32d1cae35c397d0d9e17e82c247d19533bd43b9ad1b6"} Feb 18 00:28:31 crc kubenswrapper[4847]: I0218 00:28:31.903869 4847 generic.go:334] "Generic (PLEG): container finished" podID="67f6671f-0af7-44a3-9204-8fa77554d1d1" containerID="90b9a71fd35a2013abfcbab5bc4b2a5ce4ed994c23284cfeb2427d681386f054" exitCode=0 Feb 18 00:28:31 crc kubenswrapper[4847]: I0218 00:28:31.903994 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lr9xc" event={"ID":"67f6671f-0af7-44a3-9204-8fa77554d1d1","Type":"ContainerDied","Data":"90b9a71fd35a2013abfcbab5bc4b2a5ce4ed994c23284cfeb2427d681386f054"} Feb 18 00:28:31 crc kubenswrapper[4847]: I0218 00:28:31.908817 4847 generic.go:334] "Generic (PLEG): container finished" podID="4419c48a-0a19-486b-ad17-b88461b9377b" containerID="ca840788d5e2cec5bc834eab43dfc9b4a2c7cfd1eb5b2c4e260d0e425621f360" exitCode=0 Feb 18 00:28:31 crc kubenswrapper[4847]: I0218 00:28:31.908880 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qprjq" event={"ID":"4419c48a-0a19-486b-ad17-b88461b9377b","Type":"ContainerDied","Data":"ca840788d5e2cec5bc834eab43dfc9b4a2c7cfd1eb5b2c4e260d0e425621f360"} Feb 18 00:28:31 crc kubenswrapper[4847]: I0218 00:28:31.910367 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5rg76" event={"ID":"1a7318b6-f24d-4785-bd56-ad5ecec493da","Type":"ContainerStarted","Data":"eee7e1b2efeb4338d23619a06410021653c958a86400decf4a28c31f38594508"} 
Feb 18 00:28:31 crc kubenswrapper[4847]: I0218 00:28:31.975050 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-5rg76" podStartSLOduration=164.975025673 podStartE2EDuration="2m44.975025673s" podCreationTimestamp="2026-02-18 00:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:28:31.950854866 +0000 UTC m=+185.328205818" watchObservedRunningTime="2026-02-18 00:28:31.975025673 +0000 UTC m=+185.352376615" Feb 18 00:28:32 crc kubenswrapper[4847]: I0218 00:28:32.919950 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcs2d" event={"ID":"23c01b00-fb74-42f7-8a1a-343e78623f37","Type":"ContainerStarted","Data":"82ee802ac0d5725d4869e1b22e7338ce3e894e6b467ca1f86d9187261a3c4970"} Feb 18 00:28:32 crc kubenswrapper[4847]: I0218 00:28:32.947102 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jcs2d" podStartSLOduration=3.256428049 podStartE2EDuration="32.947060419s" podCreationTimestamp="2026-02-18 00:28:00 +0000 UTC" firstStartedPulling="2026-02-18 00:28:02.417481595 +0000 UTC m=+155.794832557" lastFinishedPulling="2026-02-18 00:28:32.108113985 +0000 UTC m=+185.485464927" observedRunningTime="2026-02-18 00:28:32.941811021 +0000 UTC m=+186.319161983" watchObservedRunningTime="2026-02-18 00:28:32.947060419 +0000 UTC m=+186.324411361" Feb 18 00:28:33 crc kubenswrapper[4847]: I0218 00:28:33.928436 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv8f2" event={"ID":"767c924a-1203-477f-8501-a65f63965047","Type":"ContainerStarted","Data":"14e756059bbf6d2dcfde255e43a1bae7c1d3a3fd429e8481a40dfac04eb30656"} Feb 18 00:28:34 crc kubenswrapper[4847]: I0218 00:28:34.547138 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 18 00:28:34 crc kubenswrapper[4847]: I0218 00:28:34.973412 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bv8f2" podStartSLOduration=4.147449052 podStartE2EDuration="35.973392238s" podCreationTimestamp="2026-02-18 00:27:59 +0000 UTC" firstStartedPulling="2026-02-18 00:28:01.357883323 +0000 UTC m=+154.735234265" lastFinishedPulling="2026-02-18 00:28:33.183826499 +0000 UTC m=+186.561177451" observedRunningTime="2026-02-18 00:28:34.969304279 +0000 UTC m=+188.346655211" watchObservedRunningTime="2026-02-18 00:28:34.973392238 +0000 UTC m=+188.350743180" Feb 18 00:28:35 crc kubenswrapper[4847]: I0218 00:28:35.957967 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckhs7" event={"ID":"15a457ff-fd78-446b-85ca-acd23651863f","Type":"ContainerStarted","Data":"158b1aa6fdc6be9abadd1dbbf255249e5b1875d921952319b76e99629704d10a"} Feb 18 00:28:35 crc kubenswrapper[4847]: I0218 00:28:35.974586 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ckhs7" podStartSLOduration=3.305690364 podStartE2EDuration="37.97456635s" podCreationTimestamp="2026-02-18 00:27:58 +0000 UTC" firstStartedPulling="2026-02-18 00:28:00.276796799 +0000 UTC m=+153.654147741" lastFinishedPulling="2026-02-18 00:28:34.945672785 +0000 UTC m=+188.323023727" observedRunningTime="2026-02-18 00:28:35.973673179 +0000 UTC m=+189.351024121" watchObservedRunningTime="2026-02-18 00:28:35.97456635 +0000 UTC m=+189.351917292" Feb 18 00:28:37 crc kubenswrapper[4847]: I0218 00:28:37.973050 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tqxr4" event={"ID":"4c5d23e9-80d6-4df1-9484-3d5d452231f6","Type":"ContainerStarted","Data":"72e03cbd8e7dfb83be77793ecd1727d0259fe8690ac922c4d08a1b712eeb3d3a"} Feb 18 00:28:37 crc 
kubenswrapper[4847]: I0218 00:28:37.976267 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px9xt" event={"ID":"bb0e353b-9f34-432f-92f1-9102f53aeff3","Type":"ContainerStarted","Data":"16ecd6be9264b640e03a04e44a816a1e8998a6a9f3e40b64cab55fc4b7ecaa76"} Feb 18 00:28:37 crc kubenswrapper[4847]: I0218 00:28:37.978309 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lr9xc" event={"ID":"67f6671f-0af7-44a3-9204-8fa77554d1d1","Type":"ContainerStarted","Data":"43f086b3b289848710c80c7c5bf69ee1dc5feed3f63a10ccf01fa8dae64e6365"} Feb 18 00:28:37 crc kubenswrapper[4847]: I0218 00:28:37.980477 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qprjq" event={"ID":"4419c48a-0a19-486b-ad17-b88461b9377b","Type":"ContainerStarted","Data":"6802f9506077822d6da5ed3fc1506bf6e7c9e418eafc271621bc4cd197f813b4"} Feb 18 00:28:37 crc kubenswrapper[4847]: I0218 00:28:37.982706 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hmdff" event={"ID":"40f2a712-6701-4c22-94c2-6a644742459b","Type":"ContainerStarted","Data":"70ba8a1683986b4b65c89ef4158067b6dd4db3421cd1492e502954c84f059f39"} Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.037539 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-px9xt" podStartSLOduration=2.5895738809999997 podStartE2EDuration="41.037518479s" podCreationTimestamp="2026-02-18 00:27:57 +0000 UTC" firstStartedPulling="2026-02-18 00:27:59.180934307 +0000 UTC m=+152.558285249" lastFinishedPulling="2026-02-18 00:28:37.628878905 +0000 UTC m=+191.006229847" observedRunningTime="2026-02-18 00:28:38.032932017 +0000 UTC m=+191.410282959" watchObservedRunningTime="2026-02-18 00:28:38.037518479 +0000 UTC m=+191.414869421" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.038788 4847 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tqxr4" podStartSLOduration=4.326642854 podStartE2EDuration="41.038780799s" podCreationTimestamp="2026-02-18 00:27:57 +0000 UTC" firstStartedPulling="2026-02-18 00:28:00.316083953 +0000 UTC m=+153.693434895" lastFinishedPulling="2026-02-18 00:28:37.028221868 +0000 UTC m=+190.405572840" observedRunningTime="2026-02-18 00:28:38.006300641 +0000 UTC m=+191.383651583" watchObservedRunningTime="2026-02-18 00:28:38.038780799 +0000 UTC m=+191.416131741" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.057437 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-px9xt" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.057503 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-px9xt" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.095065 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qprjq" podStartSLOduration=2.979366144 podStartE2EDuration="37.095044586s" podCreationTimestamp="2026-02-18 00:28:01 +0000 UTC" firstStartedPulling="2026-02-18 00:28:03.514795842 +0000 UTC m=+156.892146774" lastFinishedPulling="2026-02-18 00:28:37.630474274 +0000 UTC m=+191.007825216" observedRunningTime="2026-02-18 00:28:38.091319885 +0000 UTC m=+191.468670847" watchObservedRunningTime="2026-02-18 00:28:38.095044586 +0000 UTC m=+191.472395528" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.095425 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lr9xc" podStartSLOduration=2.952202071 podStartE2EDuration="38.095420955s" podCreationTimestamp="2026-02-18 00:28:00 +0000 UTC" firstStartedPulling="2026-02-18 00:28:02.437783618 +0000 UTC m=+155.815134560" lastFinishedPulling="2026-02-18 
00:28:37.581002492 +0000 UTC m=+190.958353444" observedRunningTime="2026-02-18 00:28:38.064992876 +0000 UTC m=+191.442343818" watchObservedRunningTime="2026-02-18 00:28:38.095420955 +0000 UTC m=+191.472771897" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.127468 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hmdff" podStartSLOduration=2.836162443 podStartE2EDuration="40.127441362s" podCreationTimestamp="2026-02-18 00:27:58 +0000 UTC" firstStartedPulling="2026-02-18 00:28:00.310016016 +0000 UTC m=+153.687366958" lastFinishedPulling="2026-02-18 00:28:37.601294935 +0000 UTC m=+190.978645877" observedRunningTime="2026-02-18 00:28:38.118824663 +0000 UTC m=+191.496175605" watchObservedRunningTime="2026-02-18 00:28:38.127441362 +0000 UTC m=+191.504792304" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.212439 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tqxr4" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.212522 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tqxr4" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.477658 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hmdff" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.477734 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hmdff" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.621729 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ckhs7" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.621842 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ckhs7" Feb 18 00:28:38 crc 
kubenswrapper[4847]: I0218 00:28:38.656506 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 00:28:38 crc kubenswrapper[4847]: E0218 00:28:38.656787 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c17b0366-f840-4bce-96cb-bb8e90eaf4fa" containerName="pruner" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.656805 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="c17b0366-f840-4bce-96cb-bb8e90eaf4fa" containerName="pruner" Feb 18 00:28:38 crc kubenswrapper[4847]: E0218 00:28:38.656827 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf0bb0c0-dce7-448a-99a2-b33c10f4288d" containerName="pruner" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.656836 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf0bb0c0-dce7-448a-99a2-b33c10f4288d" containerName="pruner" Feb 18 00:28:38 crc kubenswrapper[4847]: E0218 00:28:38.656852 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17d9fff8-b1cd-4124-8dc8-607c81e15c21" containerName="image-pruner" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.656863 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d9fff8-b1cd-4124-8dc8-607c81e15c21" containerName="image-pruner" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.656978 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="c17b0366-f840-4bce-96cb-bb8e90eaf4fa" containerName="pruner" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.656994 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf0bb0c0-dce7-448a-99a2-b33c10f4288d" containerName="pruner" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.657006 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="17d9fff8-b1cd-4124-8dc8-607c81e15c21" containerName="image-pruner" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.657431 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.663309 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.663667 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.700719 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/123f20e4-2b3f-423f-95d2-ac6e4e5eb850-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"123f20e4-2b3f-423f-95d2-ac6e4e5eb850\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.700997 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/123f20e4-2b3f-423f-95d2-ac6e4e5eb850-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"123f20e4-2b3f-423f-95d2-ac6e4e5eb850\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.704491 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ckhs7" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.737731 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.802409 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/123f20e4-2b3f-423f-95d2-ac6e4e5eb850-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"123f20e4-2b3f-423f-95d2-ac6e4e5eb850\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.802464 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/123f20e4-2b3f-423f-95d2-ac6e4e5eb850-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"123f20e4-2b3f-423f-95d2-ac6e4e5eb850\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.802658 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/123f20e4-2b3f-423f-95d2-ac6e4e5eb850-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"123f20e4-2b3f-423f-95d2-ac6e4e5eb850\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.815445 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xk7s7"] Feb 18 00:28:38 crc kubenswrapper[4847]: I0218 00:28:38.846809 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/123f20e4-2b3f-423f-95d2-ac6e4e5eb850-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"123f20e4-2b3f-423f-95d2-ac6e4e5eb850\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 00:28:39 crc kubenswrapper[4847]: I0218 00:28:39.017948 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 00:28:39 crc kubenswrapper[4847]: I0218 00:28:39.218001 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-px9xt" podUID="bb0e353b-9f34-432f-92f1-9102f53aeff3" containerName="registry-server" probeResult="failure" output=< Feb 18 00:28:39 crc kubenswrapper[4847]: timeout: failed to connect service ":50051" within 1s Feb 18 00:28:39 crc kubenswrapper[4847]: > Feb 18 00:28:39 crc kubenswrapper[4847]: I0218 00:28:39.252493 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tqxr4" podUID="4c5d23e9-80d6-4df1-9484-3d5d452231f6" containerName="registry-server" probeResult="failure" output=< Feb 18 00:28:39 crc kubenswrapper[4847]: timeout: failed to connect service ":50051" within 1s Feb 18 00:28:39 crc kubenswrapper[4847]: > Feb 18 00:28:39 crc kubenswrapper[4847]: I0218 00:28:39.517453 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hmdff" podUID="40f2a712-6701-4c22-94c2-6a644742459b" containerName="registry-server" probeResult="failure" output=< Feb 18 00:28:39 crc kubenswrapper[4847]: timeout: failed to connect service ":50051" within 1s Feb 18 00:28:39 crc kubenswrapper[4847]: > Feb 18 00:28:39 crc kubenswrapper[4847]: I0218 00:28:39.542699 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 18 00:28:39 crc kubenswrapper[4847]: I0218 00:28:39.997947 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"123f20e4-2b3f-423f-95d2-ac6e4e5eb850","Type":"ContainerStarted","Data":"873e189e18ed357499fd9ce7e7525d2af1229d67594af425de8c6342ee9100da"} Feb 18 00:28:40 crc kubenswrapper[4847]: I0218 00:28:40.178259 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-bv8f2" Feb 18 00:28:40 crc kubenswrapper[4847]: I0218 00:28:40.178319 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bv8f2" Feb 18 00:28:40 crc kubenswrapper[4847]: I0218 00:28:40.226930 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bv8f2" Feb 18 00:28:40 crc kubenswrapper[4847]: I0218 00:28:40.583163 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jcs2d" Feb 18 00:28:40 crc kubenswrapper[4847]: I0218 00:28:40.583209 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jcs2d" Feb 18 00:28:40 crc kubenswrapper[4847]: I0218 00:28:40.645089 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jcs2d" Feb 18 00:28:41 crc kubenswrapper[4847]: I0218 00:28:41.005156 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"123f20e4-2b3f-423f-95d2-ac6e4e5eb850","Type":"ContainerStarted","Data":"2fdced281bdfa7283c486a1c9a7e23b75fc8f560e7565a8444a5c156bcfe0764"} Feb 18 00:28:41 crc kubenswrapper[4847]: I0218 00:28:41.022839 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=3.022818425 podStartE2EDuration="3.022818425s" podCreationTimestamp="2026-02-18 00:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:28:41.021664797 +0000 UTC m=+194.399015739" watchObservedRunningTime="2026-02-18 00:28:41.022818425 +0000 UTC m=+194.400169377" Feb 18 00:28:41 crc kubenswrapper[4847]: I0218 00:28:41.063534 4847 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jcs2d" Feb 18 00:28:41 crc kubenswrapper[4847]: I0218 00:28:41.065664 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bv8f2" Feb 18 00:28:41 crc kubenswrapper[4847]: I0218 00:28:41.252839 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lr9xc" Feb 18 00:28:41 crc kubenswrapper[4847]: I0218 00:28:41.252912 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lr9xc" Feb 18 00:28:41 crc kubenswrapper[4847]: I0218 00:28:41.603740 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qprjq" Feb 18 00:28:41 crc kubenswrapper[4847]: I0218 00:28:41.603784 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qprjq" Feb 18 00:28:42 crc kubenswrapper[4847]: I0218 00:28:42.011553 4847 generic.go:334] "Generic (PLEG): container finished" podID="123f20e4-2b3f-423f-95d2-ac6e4e5eb850" containerID="2fdced281bdfa7283c486a1c9a7e23b75fc8f560e7565a8444a5c156bcfe0764" exitCode=0 Feb 18 00:28:42 crc kubenswrapper[4847]: I0218 00:28:42.011737 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"123f20e4-2b3f-423f-95d2-ac6e4e5eb850","Type":"ContainerDied","Data":"2fdced281bdfa7283c486a1c9a7e23b75fc8f560e7565a8444a5c156bcfe0764"} Feb 18 00:28:42 crc kubenswrapper[4847]: I0218 00:28:42.290348 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lr9xc" podUID="67f6671f-0af7-44a3-9204-8fa77554d1d1" containerName="registry-server" probeResult="failure" output=< Feb 18 00:28:42 crc kubenswrapper[4847]: timeout: failed to connect service ":50051" within 1s Feb 18 00:28:42 crc 
kubenswrapper[4847]: > Feb 18 00:28:42 crc kubenswrapper[4847]: I0218 00:28:42.654566 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qprjq" podUID="4419c48a-0a19-486b-ad17-b88461b9377b" containerName="registry-server" probeResult="failure" output=< Feb 18 00:28:42 crc kubenswrapper[4847]: timeout: failed to connect service ":50051" within 1s Feb 18 00:28:42 crc kubenswrapper[4847]: > Feb 18 00:28:43 crc kubenswrapper[4847]: I0218 00:28:43.305369 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 00:28:43 crc kubenswrapper[4847]: I0218 00:28:43.431512 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/123f20e4-2b3f-423f-95d2-ac6e4e5eb850-kube-api-access\") pod \"123f20e4-2b3f-423f-95d2-ac6e4e5eb850\" (UID: \"123f20e4-2b3f-423f-95d2-ac6e4e5eb850\") " Feb 18 00:28:43 crc kubenswrapper[4847]: I0218 00:28:43.431698 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/123f20e4-2b3f-423f-95d2-ac6e4e5eb850-kubelet-dir\") pod \"123f20e4-2b3f-423f-95d2-ac6e4e5eb850\" (UID: \"123f20e4-2b3f-423f-95d2-ac6e4e5eb850\") " Feb 18 00:28:43 crc kubenswrapper[4847]: I0218 00:28:43.431755 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/123f20e4-2b3f-423f-95d2-ac6e4e5eb850-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "123f20e4-2b3f-423f-95d2-ac6e4e5eb850" (UID: "123f20e4-2b3f-423f-95d2-ac6e4e5eb850"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:28:43 crc kubenswrapper[4847]: I0218 00:28:43.432093 4847 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/123f20e4-2b3f-423f-95d2-ac6e4e5eb850-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:28:43 crc kubenswrapper[4847]: I0218 00:28:43.438549 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/123f20e4-2b3f-423f-95d2-ac6e4e5eb850-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "123f20e4-2b3f-423f-95d2-ac6e4e5eb850" (UID: "123f20e4-2b3f-423f-95d2-ac6e4e5eb850"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:28:43 crc kubenswrapper[4847]: I0218 00:28:43.451470 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcs2d"] Feb 18 00:28:43 crc kubenswrapper[4847]: I0218 00:28:43.451955 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jcs2d" podUID="23c01b00-fb74-42f7-8a1a-343e78623f37" containerName="registry-server" containerID="cri-o://82ee802ac0d5725d4869e1b22e7338ce3e894e6b467ca1f86d9187261a3c4970" gracePeriod=2 Feb 18 00:28:43 crc kubenswrapper[4847]: I0218 00:28:43.533106 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/123f20e4-2b3f-423f-95d2-ac6e4e5eb850-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.026311 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"123f20e4-2b3f-423f-95d2-ac6e4e5eb850","Type":"ContainerDied","Data":"873e189e18ed357499fd9ce7e7525d2af1229d67594af425de8c6342ee9100da"} Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.026361 4847 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="873e189e18ed357499fd9ce7e7525d2af1229d67594af425de8c6342ee9100da" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.026379 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.456338 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 00:28:44 crc kubenswrapper[4847]: E0218 00:28:44.457204 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123f20e4-2b3f-423f-95d2-ac6e4e5eb850" containerName="pruner" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.457228 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="123f20e4-2b3f-423f-95d2-ac6e4e5eb850" containerName="pruner" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.457380 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="123f20e4-2b3f-423f-95d2-ac6e4e5eb850" containerName="pruner" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.457959 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.461939 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.461948 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.469116 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.579975 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7fb04bef-d533-4b90-9c2e-71d6a27ce5a5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7fb04bef-d533-4b90-9c2e-71d6a27ce5a5\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.580032 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7fb04bef-d533-4b90-9c2e-71d6a27ce5a5-kube-api-access\") pod \"installer-9-crc\" (UID: \"7fb04bef-d533-4b90-9c2e-71d6a27ce5a5\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.580057 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7fb04bef-d533-4b90-9c2e-71d6a27ce5a5-var-lock\") pod \"installer-9-crc\" (UID: \"7fb04bef-d533-4b90-9c2e-71d6a27ce5a5\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.681632 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/7fb04bef-d533-4b90-9c2e-71d6a27ce5a5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7fb04bef-d533-4b90-9c2e-71d6a27ce5a5\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.681700 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7fb04bef-d533-4b90-9c2e-71d6a27ce5a5-kube-api-access\") pod \"installer-9-crc\" (UID: \"7fb04bef-d533-4b90-9c2e-71d6a27ce5a5\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.681724 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7fb04bef-d533-4b90-9c2e-71d6a27ce5a5-var-lock\") pod \"installer-9-crc\" (UID: \"7fb04bef-d533-4b90-9c2e-71d6a27ce5a5\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.681756 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7fb04bef-d533-4b90-9c2e-71d6a27ce5a5-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7fb04bef-d533-4b90-9c2e-71d6a27ce5a5\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.681822 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7fb04bef-d533-4b90-9c2e-71d6a27ce5a5-var-lock\") pod \"installer-9-crc\" (UID: \"7fb04bef-d533-4b90-9c2e-71d6a27ce5a5\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.708383 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7fb04bef-d533-4b90-9c2e-71d6a27ce5a5-kube-api-access\") pod \"installer-9-crc\" (UID: \"7fb04bef-d533-4b90-9c2e-71d6a27ce5a5\") " 
pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.764541 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jcs2d" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.775108 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.884379 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23c01b00-fb74-42f7-8a1a-343e78623f37-utilities\") pod \"23c01b00-fb74-42f7-8a1a-343e78623f37\" (UID: \"23c01b00-fb74-42f7-8a1a-343e78623f37\") " Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.884556 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23c01b00-fb74-42f7-8a1a-343e78623f37-catalog-content\") pod \"23c01b00-fb74-42f7-8a1a-343e78623f37\" (UID: \"23c01b00-fb74-42f7-8a1a-343e78623f37\") " Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.884686 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcqpk\" (UniqueName: \"kubernetes.io/projected/23c01b00-fb74-42f7-8a1a-343e78623f37-kube-api-access-vcqpk\") pod \"23c01b00-fb74-42f7-8a1a-343e78623f37\" (UID: \"23c01b00-fb74-42f7-8a1a-343e78623f37\") " Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.885829 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23c01b00-fb74-42f7-8a1a-343e78623f37-utilities" (OuterVolumeSpecName: "utilities") pod "23c01b00-fb74-42f7-8a1a-343e78623f37" (UID: "23c01b00-fb74-42f7-8a1a-343e78623f37"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.904826 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23c01b00-fb74-42f7-8a1a-343e78623f37-kube-api-access-vcqpk" (OuterVolumeSpecName: "kube-api-access-vcqpk") pod "23c01b00-fb74-42f7-8a1a-343e78623f37" (UID: "23c01b00-fb74-42f7-8a1a-343e78623f37"). InnerVolumeSpecName "kube-api-access-vcqpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.913786 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23c01b00-fb74-42f7-8a1a-343e78623f37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23c01b00-fb74-42f7-8a1a-343e78623f37" (UID: "23c01b00-fb74-42f7-8a1a-343e78623f37"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.986920 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23c01b00-fb74-42f7-8a1a-343e78623f37-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.987378 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcqpk\" (UniqueName: \"kubernetes.io/projected/23c01b00-fb74-42f7-8a1a-343e78623f37-kube-api-access-vcqpk\") on node \"crc\" DevicePath \"\"" Feb 18 00:28:44 crc kubenswrapper[4847]: I0218 00:28:44.987400 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23c01b00-fb74-42f7-8a1a-343e78623f37-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:28:45 crc kubenswrapper[4847]: I0218 00:28:45.036559 4847 generic.go:334] "Generic (PLEG): container finished" podID="23c01b00-fb74-42f7-8a1a-343e78623f37" 
containerID="82ee802ac0d5725d4869e1b22e7338ce3e894e6b467ca1f86d9187261a3c4970" exitCode=0 Feb 18 00:28:45 crc kubenswrapper[4847]: I0218 00:28:45.036653 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcs2d" event={"ID":"23c01b00-fb74-42f7-8a1a-343e78623f37","Type":"ContainerDied","Data":"82ee802ac0d5725d4869e1b22e7338ce3e894e6b467ca1f86d9187261a3c4970"} Feb 18 00:28:45 crc kubenswrapper[4847]: I0218 00:28:45.036764 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jcs2d" event={"ID":"23c01b00-fb74-42f7-8a1a-343e78623f37","Type":"ContainerDied","Data":"5e206661476bfc65daa8c05e69aa7464a2db76c311407b92bcbb7ec93e7ead9c"} Feb 18 00:28:45 crc kubenswrapper[4847]: I0218 00:28:45.036792 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jcs2d" Feb 18 00:28:45 crc kubenswrapper[4847]: I0218 00:28:45.036812 4847 scope.go:117] "RemoveContainer" containerID="82ee802ac0d5725d4869e1b22e7338ce3e894e6b467ca1f86d9187261a3c4970" Feb 18 00:28:45 crc kubenswrapper[4847]: I0218 00:28:45.066091 4847 scope.go:117] "RemoveContainer" containerID="ce22a845211a9a2c8b193b1620a9a1e45a2576e4e938da28a3337a262ba8ba6d" Feb 18 00:28:45 crc kubenswrapper[4847]: I0218 00:28:45.080546 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcs2d"] Feb 18 00:28:45 crc kubenswrapper[4847]: I0218 00:28:45.083111 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jcs2d"] Feb 18 00:28:45 crc kubenswrapper[4847]: I0218 00:28:45.102480 4847 scope.go:117] "RemoveContainer" containerID="10d173d479729635d06cd58ffffa6ed257810b14671ed25a1f5dac94ff8c281d" Feb 18 00:28:45 crc kubenswrapper[4847]: I0218 00:28:45.128282 4847 scope.go:117] "RemoveContainer" containerID="82ee802ac0d5725d4869e1b22e7338ce3e894e6b467ca1f86d9187261a3c4970" Feb 18 
00:28:45 crc kubenswrapper[4847]: E0218 00:28:45.128712 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82ee802ac0d5725d4869e1b22e7338ce3e894e6b467ca1f86d9187261a3c4970\": container with ID starting with 82ee802ac0d5725d4869e1b22e7338ce3e894e6b467ca1f86d9187261a3c4970 not found: ID does not exist" containerID="82ee802ac0d5725d4869e1b22e7338ce3e894e6b467ca1f86d9187261a3c4970" Feb 18 00:28:45 crc kubenswrapper[4847]: I0218 00:28:45.128760 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82ee802ac0d5725d4869e1b22e7338ce3e894e6b467ca1f86d9187261a3c4970"} err="failed to get container status \"82ee802ac0d5725d4869e1b22e7338ce3e894e6b467ca1f86d9187261a3c4970\": rpc error: code = NotFound desc = could not find container \"82ee802ac0d5725d4869e1b22e7338ce3e894e6b467ca1f86d9187261a3c4970\": container with ID starting with 82ee802ac0d5725d4869e1b22e7338ce3e894e6b467ca1f86d9187261a3c4970 not found: ID does not exist" Feb 18 00:28:45 crc kubenswrapper[4847]: I0218 00:28:45.128867 4847 scope.go:117] "RemoveContainer" containerID="ce22a845211a9a2c8b193b1620a9a1e45a2576e4e938da28a3337a262ba8ba6d" Feb 18 00:28:45 crc kubenswrapper[4847]: E0218 00:28:45.129332 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce22a845211a9a2c8b193b1620a9a1e45a2576e4e938da28a3337a262ba8ba6d\": container with ID starting with ce22a845211a9a2c8b193b1620a9a1e45a2576e4e938da28a3337a262ba8ba6d not found: ID does not exist" containerID="ce22a845211a9a2c8b193b1620a9a1e45a2576e4e938da28a3337a262ba8ba6d" Feb 18 00:28:45 crc kubenswrapper[4847]: I0218 00:28:45.129362 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce22a845211a9a2c8b193b1620a9a1e45a2576e4e938da28a3337a262ba8ba6d"} err="failed to get container status 
\"ce22a845211a9a2c8b193b1620a9a1e45a2576e4e938da28a3337a262ba8ba6d\": rpc error: code = NotFound desc = could not find container \"ce22a845211a9a2c8b193b1620a9a1e45a2576e4e938da28a3337a262ba8ba6d\": container with ID starting with ce22a845211a9a2c8b193b1620a9a1e45a2576e4e938da28a3337a262ba8ba6d not found: ID does not exist" Feb 18 00:28:45 crc kubenswrapper[4847]: I0218 00:28:45.129380 4847 scope.go:117] "RemoveContainer" containerID="10d173d479729635d06cd58ffffa6ed257810b14671ed25a1f5dac94ff8c281d" Feb 18 00:28:45 crc kubenswrapper[4847]: E0218 00:28:45.129671 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10d173d479729635d06cd58ffffa6ed257810b14671ed25a1f5dac94ff8c281d\": container with ID starting with 10d173d479729635d06cd58ffffa6ed257810b14671ed25a1f5dac94ff8c281d not found: ID does not exist" containerID="10d173d479729635d06cd58ffffa6ed257810b14671ed25a1f5dac94ff8c281d" Feb 18 00:28:45 crc kubenswrapper[4847]: I0218 00:28:45.129691 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10d173d479729635d06cd58ffffa6ed257810b14671ed25a1f5dac94ff8c281d"} err="failed to get container status \"10d173d479729635d06cd58ffffa6ed257810b14671ed25a1f5dac94ff8c281d\": rpc error: code = NotFound desc = could not find container \"10d173d479729635d06cd58ffffa6ed257810b14671ed25a1f5dac94ff8c281d\": container with ID starting with 10d173d479729635d06cd58ffffa6ed257810b14671ed25a1f5dac94ff8c281d not found: ID does not exist" Feb 18 00:28:45 crc kubenswrapper[4847]: I0218 00:28:45.258408 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 18 00:28:45 crc kubenswrapper[4847]: W0218 00:28:45.274032 4847 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-pod7fb04bef_d533_4b90_9c2e_71d6a27ce5a5.slice/crio-b1649f2aa4ba6d40ebaf5b58ba0e4db105f7488518948516ff8e89fe048c71c3 WatchSource:0}: Error finding container b1649f2aa4ba6d40ebaf5b58ba0e4db105f7488518948516ff8e89fe048c71c3: Status 404 returned error can't find the container with id b1649f2aa4ba6d40ebaf5b58ba0e4db105f7488518948516ff8e89fe048c71c3 Feb 18 00:28:45 crc kubenswrapper[4847]: I0218 00:28:45.417335 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23c01b00-fb74-42f7-8a1a-343e78623f37" path="/var/lib/kubelet/pods/23c01b00-fb74-42f7-8a1a-343e78623f37/volumes" Feb 18 00:28:46 crc kubenswrapper[4847]: I0218 00:28:46.049433 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7fb04bef-d533-4b90-9c2e-71d6a27ce5a5","Type":"ContainerStarted","Data":"40b62006ae47e8455ffa52bd56dc74a74a5ef71a338b1328dbdf8f5cfafc85ca"} Feb 18 00:28:46 crc kubenswrapper[4847]: I0218 00:28:46.049509 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7fb04bef-d533-4b90-9c2e-71d6a27ce5a5","Type":"ContainerStarted","Data":"b1649f2aa4ba6d40ebaf5b58ba0e4db105f7488518948516ff8e89fe048c71c3"} Feb 18 00:28:46 crc kubenswrapper[4847]: I0218 00:28:46.080301 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.080272702 podStartE2EDuration="2.080272702s" podCreationTimestamp="2026-02-18 00:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:28:46.078462498 +0000 UTC m=+199.455813480" watchObservedRunningTime="2026-02-18 00:28:46.080272702 +0000 UTC m=+199.457623654" Feb 18 00:28:48 crc kubenswrapper[4847]: I0218 00:28:48.130391 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-px9xt" Feb 18 00:28:48 crc kubenswrapper[4847]: I0218 00:28:48.194171 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-px9xt" Feb 18 00:28:48 crc kubenswrapper[4847]: I0218 00:28:48.277223 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tqxr4" Feb 18 00:28:48 crc kubenswrapper[4847]: I0218 00:28:48.325895 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tqxr4" Feb 18 00:28:48 crc kubenswrapper[4847]: I0218 00:28:48.515037 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hmdff" Feb 18 00:28:48 crc kubenswrapper[4847]: I0218 00:28:48.592371 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hmdff" Feb 18 00:28:48 crc kubenswrapper[4847]: I0218 00:28:48.663132 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ckhs7" Feb 18 00:28:49 crc kubenswrapper[4847]: I0218 00:28:49.252112 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hmdff"] Feb 18 00:28:50 crc kubenswrapper[4847]: I0218 00:28:50.089090 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hmdff" podUID="40f2a712-6701-4c22-94c2-6a644742459b" containerName="registry-server" containerID="cri-o://70ba8a1683986b4b65c89ef4158067b6dd4db3421cd1492e502954c84f059f39" gracePeriod=2 Feb 18 00:28:50 crc kubenswrapper[4847]: I0218 00:28:50.499304 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hmdff" Feb 18 00:28:50 crc kubenswrapper[4847]: I0218 00:28:50.595710 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40f2a712-6701-4c22-94c2-6a644742459b-utilities\") pod \"40f2a712-6701-4c22-94c2-6a644742459b\" (UID: \"40f2a712-6701-4c22-94c2-6a644742459b\") " Feb 18 00:28:50 crc kubenswrapper[4847]: I0218 00:28:50.595808 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40f2a712-6701-4c22-94c2-6a644742459b-catalog-content\") pod \"40f2a712-6701-4c22-94c2-6a644742459b\" (UID: \"40f2a712-6701-4c22-94c2-6a644742459b\") " Feb 18 00:28:50 crc kubenswrapper[4847]: I0218 00:28:50.596020 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhvrj\" (UniqueName: \"kubernetes.io/projected/40f2a712-6701-4c22-94c2-6a644742459b-kube-api-access-jhvrj\") pod \"40f2a712-6701-4c22-94c2-6a644742459b\" (UID: \"40f2a712-6701-4c22-94c2-6a644742459b\") " Feb 18 00:28:50 crc kubenswrapper[4847]: I0218 00:28:50.597316 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40f2a712-6701-4c22-94c2-6a644742459b-utilities" (OuterVolumeSpecName: "utilities") pod "40f2a712-6701-4c22-94c2-6a644742459b" (UID: "40f2a712-6701-4c22-94c2-6a644742459b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:28:50 crc kubenswrapper[4847]: I0218 00:28:50.609719 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40f2a712-6701-4c22-94c2-6a644742459b-kube-api-access-jhvrj" (OuterVolumeSpecName: "kube-api-access-jhvrj") pod "40f2a712-6701-4c22-94c2-6a644742459b" (UID: "40f2a712-6701-4c22-94c2-6a644742459b"). InnerVolumeSpecName "kube-api-access-jhvrj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:28:50 crc kubenswrapper[4847]: I0218 00:28:50.656436 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ckhs7"] Feb 18 00:28:50 crc kubenswrapper[4847]: I0218 00:28:50.658045 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ckhs7" podUID="15a457ff-fd78-446b-85ca-acd23651863f" containerName="registry-server" containerID="cri-o://158b1aa6fdc6be9abadd1dbbf255249e5b1875d921952319b76e99629704d10a" gracePeriod=2 Feb 18 00:28:50 crc kubenswrapper[4847]: I0218 00:28:50.668350 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40f2a712-6701-4c22-94c2-6a644742459b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40f2a712-6701-4c22-94c2-6a644742459b" (UID: "40f2a712-6701-4c22-94c2-6a644742459b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:28:50 crc kubenswrapper[4847]: I0218 00:28:50.697824 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhvrj\" (UniqueName: \"kubernetes.io/projected/40f2a712-6701-4c22-94c2-6a644742459b-kube-api-access-jhvrj\") on node \"crc\" DevicePath \"\"" Feb 18 00:28:50 crc kubenswrapper[4847]: I0218 00:28:50.697870 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40f2a712-6701-4c22-94c2-6a644742459b-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:28:50 crc kubenswrapper[4847]: I0218 00:28:50.697882 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40f2a712-6701-4c22-94c2-6a644742459b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.100011 4847 generic.go:334] "Generic (PLEG): container finished" 
podID="15a457ff-fd78-446b-85ca-acd23651863f" containerID="158b1aa6fdc6be9abadd1dbbf255249e5b1875d921952319b76e99629704d10a" exitCode=0 Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.100087 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckhs7" event={"ID":"15a457ff-fd78-446b-85ca-acd23651863f","Type":"ContainerDied","Data":"158b1aa6fdc6be9abadd1dbbf255249e5b1875d921952319b76e99629704d10a"} Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.100123 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckhs7" event={"ID":"15a457ff-fd78-446b-85ca-acd23651863f","Type":"ContainerDied","Data":"1f70ba2f62c676c4dc49b8a7ed5cd9239ac348919de96315503917f3596374ce"} Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.100136 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f70ba2f62c676c4dc49b8a7ed5cd9239ac348919de96315503917f3596374ce" Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.103835 4847 generic.go:334] "Generic (PLEG): container finished" podID="40f2a712-6701-4c22-94c2-6a644742459b" containerID="70ba8a1683986b4b65c89ef4158067b6dd4db3421cd1492e502954c84f059f39" exitCode=0 Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.103902 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hmdff" event={"ID":"40f2a712-6701-4c22-94c2-6a644742459b","Type":"ContainerDied","Data":"70ba8a1683986b4b65c89ef4158067b6dd4db3421cd1492e502954c84f059f39"} Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.103969 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hmdff" event={"ID":"40f2a712-6701-4c22-94c2-6a644742459b","Type":"ContainerDied","Data":"829f7a68396de8dee18a6d8dfad575a3917148a0e9db3169cb6fb52f4061fb1b"} Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.103988 4847 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hmdff" Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.104019 4847 scope.go:117] "RemoveContainer" containerID="70ba8a1683986b4b65c89ef4158067b6dd4db3421cd1492e502954c84f059f39" Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.104587 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ckhs7" Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.132115 4847 scope.go:117] "RemoveContainer" containerID="531ac520c2ca8d84316f32d1cae35c397d0d9e17e82c247d19533bd43b9ad1b6" Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.208050 4847 scope.go:117] "RemoveContainer" containerID="07bb396ab0a22a3d1163c70e3034b646b8e8cfbb5c8289208ad9c40ff356a3b6" Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.212037 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a457ff-fd78-446b-85ca-acd23651863f-catalog-content\") pod \"15a457ff-fd78-446b-85ca-acd23651863f\" (UID: \"15a457ff-fd78-446b-85ca-acd23651863f\") " Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.212099 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft77l\" (UniqueName: \"kubernetes.io/projected/15a457ff-fd78-446b-85ca-acd23651863f-kube-api-access-ft77l\") pod \"15a457ff-fd78-446b-85ca-acd23651863f\" (UID: \"15a457ff-fd78-446b-85ca-acd23651863f\") " Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.212243 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a457ff-fd78-446b-85ca-acd23651863f-utilities\") pod \"15a457ff-fd78-446b-85ca-acd23651863f\" (UID: \"15a457ff-fd78-446b-85ca-acd23651863f\") " Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.216763 4847 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a457ff-fd78-446b-85ca-acd23651863f-kube-api-access-ft77l" (OuterVolumeSpecName: "kube-api-access-ft77l") pod "15a457ff-fd78-446b-85ca-acd23651863f" (UID: "15a457ff-fd78-446b-85ca-acd23651863f"). InnerVolumeSpecName "kube-api-access-ft77l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.217253 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15a457ff-fd78-446b-85ca-acd23651863f-utilities" (OuterVolumeSpecName: "utilities") pod "15a457ff-fd78-446b-85ca-acd23651863f" (UID: "15a457ff-fd78-446b-85ca-acd23651863f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.219997 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hmdff"] Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.221722 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hmdff"] Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.263861 4847 scope.go:117] "RemoveContainer" containerID="70ba8a1683986b4b65c89ef4158067b6dd4db3421cd1492e502954c84f059f39" Feb 18 00:28:51 crc kubenswrapper[4847]: E0218 00:28:51.264472 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70ba8a1683986b4b65c89ef4158067b6dd4db3421cd1492e502954c84f059f39\": container with ID starting with 70ba8a1683986b4b65c89ef4158067b6dd4db3421cd1492e502954c84f059f39 not found: ID does not exist" containerID="70ba8a1683986b4b65c89ef4158067b6dd4db3421cd1492e502954c84f059f39" Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.264514 4847 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"70ba8a1683986b4b65c89ef4158067b6dd4db3421cd1492e502954c84f059f39"} err="failed to get container status \"70ba8a1683986b4b65c89ef4158067b6dd4db3421cd1492e502954c84f059f39\": rpc error: code = NotFound desc = could not find container \"70ba8a1683986b4b65c89ef4158067b6dd4db3421cd1492e502954c84f059f39\": container with ID starting with 70ba8a1683986b4b65c89ef4158067b6dd4db3421cd1492e502954c84f059f39 not found: ID does not exist" Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.264543 4847 scope.go:117] "RemoveContainer" containerID="531ac520c2ca8d84316f32d1cae35c397d0d9e17e82c247d19533bd43b9ad1b6" Feb 18 00:28:51 crc kubenswrapper[4847]: E0218 00:28:51.265557 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"531ac520c2ca8d84316f32d1cae35c397d0d9e17e82c247d19533bd43b9ad1b6\": container with ID starting with 531ac520c2ca8d84316f32d1cae35c397d0d9e17e82c247d19533bd43b9ad1b6 not found: ID does not exist" containerID="531ac520c2ca8d84316f32d1cae35c397d0d9e17e82c247d19533bd43b9ad1b6" Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.265638 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"531ac520c2ca8d84316f32d1cae35c397d0d9e17e82c247d19533bd43b9ad1b6"} err="failed to get container status \"531ac520c2ca8d84316f32d1cae35c397d0d9e17e82c247d19533bd43b9ad1b6\": rpc error: code = NotFound desc = could not find container \"531ac520c2ca8d84316f32d1cae35c397d0d9e17e82c247d19533bd43b9ad1b6\": container with ID starting with 531ac520c2ca8d84316f32d1cae35c397d0d9e17e82c247d19533bd43b9ad1b6 not found: ID does not exist" Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.265683 4847 scope.go:117] "RemoveContainer" containerID="07bb396ab0a22a3d1163c70e3034b646b8e8cfbb5c8289208ad9c40ff356a3b6" Feb 18 00:28:51 crc kubenswrapper[4847]: E0218 00:28:51.266052 4847 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"07bb396ab0a22a3d1163c70e3034b646b8e8cfbb5c8289208ad9c40ff356a3b6\": container with ID starting with 07bb396ab0a22a3d1163c70e3034b646b8e8cfbb5c8289208ad9c40ff356a3b6 not found: ID does not exist" containerID="07bb396ab0a22a3d1163c70e3034b646b8e8cfbb5c8289208ad9c40ff356a3b6" Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.266124 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07bb396ab0a22a3d1163c70e3034b646b8e8cfbb5c8289208ad9c40ff356a3b6"} err="failed to get container status \"07bb396ab0a22a3d1163c70e3034b646b8e8cfbb5c8289208ad9c40ff356a3b6\": rpc error: code = NotFound desc = could not find container \"07bb396ab0a22a3d1163c70e3034b646b8e8cfbb5c8289208ad9c40ff356a3b6\": container with ID starting with 07bb396ab0a22a3d1163c70e3034b646b8e8cfbb5c8289208ad9c40ff356a3b6 not found: ID does not exist" Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.271877 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15a457ff-fd78-446b-85ca-acd23651863f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15a457ff-fd78-446b-85ca-acd23651863f" (UID: "15a457ff-fd78-446b-85ca-acd23651863f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.297025 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lr9xc" Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.316644 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft77l\" (UniqueName: \"kubernetes.io/projected/15a457ff-fd78-446b-85ca-acd23651863f-kube-api-access-ft77l\") on node \"crc\" DevicePath \"\"" Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.316694 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15a457ff-fd78-446b-85ca-acd23651863f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.316712 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15a457ff-fd78-446b-85ca-acd23651863f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.341218 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lr9xc" Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.411723 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40f2a712-6701-4c22-94c2-6a644742459b" path="/var/lib/kubelet/pods/40f2a712-6701-4c22-94c2-6a644742459b/volumes" Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.650533 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qprjq" Feb 18 00:28:51 crc kubenswrapper[4847]: I0218 00:28:51.698205 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qprjq" Feb 18 00:28:52 crc kubenswrapper[4847]: I0218 00:28:52.114025 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ckhs7" Feb 18 00:28:52 crc kubenswrapper[4847]: I0218 00:28:52.139889 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ckhs7"] Feb 18 00:28:52 crc kubenswrapper[4847]: I0218 00:28:52.146852 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ckhs7"] Feb 18 00:28:53 crc kubenswrapper[4847]: I0218 00:28:53.412947 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15a457ff-fd78-446b-85ca-acd23651863f" path="/var/lib/kubelet/pods/15a457ff-fd78-446b-85ca-acd23651863f/volumes" Feb 18 00:28:53 crc kubenswrapper[4847]: I0218 00:28:53.493378 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:28:53 crc kubenswrapper[4847]: I0218 00:28:53.493468 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:28:53 crc kubenswrapper[4847]: I0218 00:28:53.493544 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 00:28:53 crc kubenswrapper[4847]: I0218 00:28:53.494431 4847 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"21c935ca9c8e2ee24068070e45953a236b1e5a57c92d0e5b4f033ed0aeab7831"} pod="openshift-machine-config-operator/machine-config-daemon-xsj47" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:28:53 crc kubenswrapper[4847]: I0218 00:28:53.494549 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" containerID="cri-o://21c935ca9c8e2ee24068070e45953a236b1e5a57c92d0e5b4f033ed0aeab7831" gracePeriod=600 Feb 18 00:28:54 crc kubenswrapper[4847]: I0218 00:28:54.131447 4847 generic.go:334] "Generic (PLEG): container finished" podID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerID="21c935ca9c8e2ee24068070e45953a236b1e5a57c92d0e5b4f033ed0aeab7831" exitCode=0 Feb 18 00:28:54 crc kubenswrapper[4847]: I0218 00:28:54.131576 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerDied","Data":"21c935ca9c8e2ee24068070e45953a236b1e5a57c92d0e5b4f033ed0aeab7831"} Feb 18 00:28:54 crc kubenswrapper[4847]: I0218 00:28:54.132083 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerStarted","Data":"2d48a1afcf940f6238028cb74fe52ba15e293dc18434794ab21f623d2d49cf75"} Feb 18 00:28:55 crc kubenswrapper[4847]: I0218 00:28:55.654195 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qprjq"] Feb 18 00:28:55 crc kubenswrapper[4847]: I0218 00:28:55.654830 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qprjq" podUID="4419c48a-0a19-486b-ad17-b88461b9377b" containerName="registry-server" containerID="cri-o://6802f9506077822d6da5ed3fc1506bf6e7c9e418eafc271621bc4cd197f813b4" gracePeriod=2 Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.062045 4847 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qprjq" Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.148473 4847 generic.go:334] "Generic (PLEG): container finished" podID="4419c48a-0a19-486b-ad17-b88461b9377b" containerID="6802f9506077822d6da5ed3fc1506bf6e7c9e418eafc271621bc4cd197f813b4" exitCode=0 Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.148534 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qprjq" event={"ID":"4419c48a-0a19-486b-ad17-b88461b9377b","Type":"ContainerDied","Data":"6802f9506077822d6da5ed3fc1506bf6e7c9e418eafc271621bc4cd197f813b4"} Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.148568 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qprjq" event={"ID":"4419c48a-0a19-486b-ad17-b88461b9377b","Type":"ContainerDied","Data":"90d51cdfdd3e191167ad84d5377d747742e93232e8c72a76ee02f4489965dd9e"} Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.148587 4847 scope.go:117] "RemoveContainer" containerID="6802f9506077822d6da5ed3fc1506bf6e7c9e418eafc271621bc4cd197f813b4" Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.148581 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qprjq" Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.173233 4847 scope.go:117] "RemoveContainer" containerID="ca840788d5e2cec5bc834eab43dfc9b4a2c7cfd1eb5b2c4e260d0e425621f360" Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.194456 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4419c48a-0a19-486b-ad17-b88461b9377b-utilities\") pod \"4419c48a-0a19-486b-ad17-b88461b9377b\" (UID: \"4419c48a-0a19-486b-ad17-b88461b9377b\") " Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.194498 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4419c48a-0a19-486b-ad17-b88461b9377b-catalog-content\") pod \"4419c48a-0a19-486b-ad17-b88461b9377b\" (UID: \"4419c48a-0a19-486b-ad17-b88461b9377b\") " Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.194630 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnv5g\" (UniqueName: \"kubernetes.io/projected/4419c48a-0a19-486b-ad17-b88461b9377b-kube-api-access-mnv5g\") pod \"4419c48a-0a19-486b-ad17-b88461b9377b\" (UID: \"4419c48a-0a19-486b-ad17-b88461b9377b\") " Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.196735 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4419c48a-0a19-486b-ad17-b88461b9377b-utilities" (OuterVolumeSpecName: "utilities") pod "4419c48a-0a19-486b-ad17-b88461b9377b" (UID: "4419c48a-0a19-486b-ad17-b88461b9377b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.198586 4847 scope.go:117] "RemoveContainer" containerID="60d6432237f77988aa3bbd332b483f1b534623ba7c90222707d18469efc2a216" Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.204150 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4419c48a-0a19-486b-ad17-b88461b9377b-kube-api-access-mnv5g" (OuterVolumeSpecName: "kube-api-access-mnv5g") pod "4419c48a-0a19-486b-ad17-b88461b9377b" (UID: "4419c48a-0a19-486b-ad17-b88461b9377b"). InnerVolumeSpecName "kube-api-access-mnv5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.242057 4847 scope.go:117] "RemoveContainer" containerID="6802f9506077822d6da5ed3fc1506bf6e7c9e418eafc271621bc4cd197f813b4" Feb 18 00:28:56 crc kubenswrapper[4847]: E0218 00:28:56.243884 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6802f9506077822d6da5ed3fc1506bf6e7c9e418eafc271621bc4cd197f813b4\": container with ID starting with 6802f9506077822d6da5ed3fc1506bf6e7c9e418eafc271621bc4cd197f813b4 not found: ID does not exist" containerID="6802f9506077822d6da5ed3fc1506bf6e7c9e418eafc271621bc4cd197f813b4" Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.244016 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6802f9506077822d6da5ed3fc1506bf6e7c9e418eafc271621bc4cd197f813b4"} err="failed to get container status \"6802f9506077822d6da5ed3fc1506bf6e7c9e418eafc271621bc4cd197f813b4\": rpc error: code = NotFound desc = could not find container \"6802f9506077822d6da5ed3fc1506bf6e7c9e418eafc271621bc4cd197f813b4\": container with ID starting with 6802f9506077822d6da5ed3fc1506bf6e7c9e418eafc271621bc4cd197f813b4 not found: ID does not exist" Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.244136 
4847 scope.go:117] "RemoveContainer" containerID="ca840788d5e2cec5bc834eab43dfc9b4a2c7cfd1eb5b2c4e260d0e425621f360" Feb 18 00:28:56 crc kubenswrapper[4847]: E0218 00:28:56.245994 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca840788d5e2cec5bc834eab43dfc9b4a2c7cfd1eb5b2c4e260d0e425621f360\": container with ID starting with ca840788d5e2cec5bc834eab43dfc9b4a2c7cfd1eb5b2c4e260d0e425621f360 not found: ID does not exist" containerID="ca840788d5e2cec5bc834eab43dfc9b4a2c7cfd1eb5b2c4e260d0e425621f360" Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.246024 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca840788d5e2cec5bc834eab43dfc9b4a2c7cfd1eb5b2c4e260d0e425621f360"} err="failed to get container status \"ca840788d5e2cec5bc834eab43dfc9b4a2c7cfd1eb5b2c4e260d0e425621f360\": rpc error: code = NotFound desc = could not find container \"ca840788d5e2cec5bc834eab43dfc9b4a2c7cfd1eb5b2c4e260d0e425621f360\": container with ID starting with ca840788d5e2cec5bc834eab43dfc9b4a2c7cfd1eb5b2c4e260d0e425621f360 not found: ID does not exist" Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.246043 4847 scope.go:117] "RemoveContainer" containerID="60d6432237f77988aa3bbd332b483f1b534623ba7c90222707d18469efc2a216" Feb 18 00:28:56 crc kubenswrapper[4847]: E0218 00:28:56.246483 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60d6432237f77988aa3bbd332b483f1b534623ba7c90222707d18469efc2a216\": container with ID starting with 60d6432237f77988aa3bbd332b483f1b534623ba7c90222707d18469efc2a216 not found: ID does not exist" containerID="60d6432237f77988aa3bbd332b483f1b534623ba7c90222707d18469efc2a216" Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.246577 4847 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"60d6432237f77988aa3bbd332b483f1b534623ba7c90222707d18469efc2a216"} err="failed to get container status \"60d6432237f77988aa3bbd332b483f1b534623ba7c90222707d18469efc2a216\": rpc error: code = NotFound desc = could not find container \"60d6432237f77988aa3bbd332b483f1b534623ba7c90222707d18469efc2a216\": container with ID starting with 60d6432237f77988aa3bbd332b483f1b534623ba7c90222707d18469efc2a216 not found: ID does not exist" Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.296529 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnv5g\" (UniqueName: \"kubernetes.io/projected/4419c48a-0a19-486b-ad17-b88461b9377b-kube-api-access-mnv5g\") on node \"crc\" DevicePath \"\"" Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.296571 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4419c48a-0a19-486b-ad17-b88461b9377b-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.339489 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4419c48a-0a19-486b-ad17-b88461b9377b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4419c48a-0a19-486b-ad17-b88461b9377b" (UID: "4419c48a-0a19-486b-ad17-b88461b9377b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.398302 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4419c48a-0a19-486b-ad17-b88461b9377b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.492656 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qprjq"] Feb 18 00:28:56 crc kubenswrapper[4847]: I0218 00:28:56.498286 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qprjq"] Feb 18 00:28:57 crc kubenswrapper[4847]: I0218 00:28:57.415337 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4419c48a-0a19-486b-ad17-b88461b9377b" path="/var/lib/kubelet/pods/4419c48a-0a19-486b-ad17-b88461b9377b/volumes" Feb 18 00:29:03 crc kubenswrapper[4847]: I0218 00:29:03.855163 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" podUID="c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c" containerName="oauth-openshift" containerID="cri-o://96a495bf2c9adb1962cd35cf5fe155423f82659af87ceee478f2ff2291b7bd6f" gracePeriod=15 Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.206215 4847 generic.go:334] "Generic (PLEG): container finished" podID="c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c" containerID="96a495bf2c9adb1962cd35cf5fe155423f82659af87ceee478f2ff2291b7bd6f" exitCode=0 Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.206299 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" event={"ID":"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c","Type":"ContainerDied","Data":"96a495bf2c9adb1962cd35cf5fe155423f82659af87ceee478f2ff2291b7bd6f"} Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.206874 4847 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" event={"ID":"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c","Type":"ContainerDied","Data":"720e421c5e1d3176ba34a5df13ee796366d1ca5dbc664fb3e722ce016361237d"} Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.206907 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="720e421c5e1d3176ba34a5df13ee796366d1ca5dbc664fb3e722ce016361237d" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.238798 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.325066 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-session\") pod \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.325110 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-cliconfig\") pod \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.325146 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-audit-dir\") pod \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.325204 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-service-ca\") pod \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.325240 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-audit-policies\") pod \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.325316 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-template-login\") pod \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.325319 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c" (UID: "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.325360 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5fkl\" (UniqueName: \"kubernetes.io/projected/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-kube-api-access-z5fkl\") pod \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.325387 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-template-error\") pod \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.325421 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-ocp-branding-template\") pod \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.325446 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-idp-0-file-data\") pod \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.325474 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-template-provider-selection\") pod \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\" (UID: 
\"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.325504 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-router-certs\") pod \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.325560 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-trusted-ca-bundle\") pod \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.325584 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-serving-cert\") pod \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\" (UID: \"c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c\") " Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.325836 4847 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.326263 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c" (UID: "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.326289 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c" (UID: "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.326432 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c" (UID: "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.327618 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c" (UID: "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.333692 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c" (UID: "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.334388 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c" (UID: "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.334720 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c" (UID: "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.334778 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-kube-api-access-z5fkl" (OuterVolumeSpecName: "kube-api-access-z5fkl") pod "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c" (UID: "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c"). InnerVolumeSpecName "kube-api-access-z5fkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.335065 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c" (UID: "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.335847 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c" (UID: "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.336694 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c" (UID: "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.336914 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c" (UID: "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.337287 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c" (UID: "c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.427057 4847 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.427098 4847 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.427114 4847 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.427128 4847 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.427140 4847 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.427150 4847 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.427162 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5fkl\" (UniqueName: 
\"kubernetes.io/projected/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-kube-api-access-z5fkl\") on node \"crc\" DevicePath \"\"" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.427172 4847 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.427182 4847 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.427199 4847 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.427210 4847 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.427222 4847 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:29:04 crc kubenswrapper[4847]: I0218 00:29:04.427232 4847 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:29:05 crc 
kubenswrapper[4847]: I0218 00:29:05.211290 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xk7s7" Feb 18 00:29:05 crc kubenswrapper[4847]: I0218 00:29:05.243532 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xk7s7"] Feb 18 00:29:05 crc kubenswrapper[4847]: I0218 00:29:05.254410 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xk7s7"] Feb 18 00:29:05 crc kubenswrapper[4847]: I0218 00:29:05.418620 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c" path="/var/lib/kubelet/pods/c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c/volumes" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.496847 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-bd7987fd5-qxf65"] Feb 18 00:29:09 crc kubenswrapper[4847]: E0218 00:29:09.497570 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a457ff-fd78-446b-85ca-acd23651863f" containerName="extract-content" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.497584 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a457ff-fd78-446b-85ca-acd23651863f" containerName="extract-content" Feb 18 00:29:09 crc kubenswrapper[4847]: E0218 00:29:09.497611 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23c01b00-fb74-42f7-8a1a-343e78623f37" containerName="extract-content" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.497618 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="23c01b00-fb74-42f7-8a1a-343e78623f37" containerName="extract-content" Feb 18 00:29:09 crc kubenswrapper[4847]: E0218 00:29:09.497627 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40f2a712-6701-4c22-94c2-6a644742459b" containerName="extract-content" Feb 18 00:29:09 crc 
kubenswrapper[4847]: I0218 00:29:09.497632 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f2a712-6701-4c22-94c2-6a644742459b" containerName="extract-content" Feb 18 00:29:09 crc kubenswrapper[4847]: E0218 00:29:09.497639 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4419c48a-0a19-486b-ad17-b88461b9377b" containerName="extract-content" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.497644 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="4419c48a-0a19-486b-ad17-b88461b9377b" containerName="extract-content" Feb 18 00:29:09 crc kubenswrapper[4847]: E0218 00:29:09.497660 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40f2a712-6701-4c22-94c2-6a644742459b" containerName="registry-server" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.497665 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f2a712-6701-4c22-94c2-6a644742459b" containerName="registry-server" Feb 18 00:29:09 crc kubenswrapper[4847]: E0218 00:29:09.497674 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a457ff-fd78-446b-85ca-acd23651863f" containerName="extract-utilities" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.497680 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a457ff-fd78-446b-85ca-acd23651863f" containerName="extract-utilities" Feb 18 00:29:09 crc kubenswrapper[4847]: E0218 00:29:09.497688 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23c01b00-fb74-42f7-8a1a-343e78623f37" containerName="registry-server" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.497693 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="23c01b00-fb74-42f7-8a1a-343e78623f37" containerName="registry-server" Feb 18 00:29:09 crc kubenswrapper[4847]: E0218 00:29:09.497700 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4419c48a-0a19-486b-ad17-b88461b9377b" containerName="extract-utilities" Feb 18 00:29:09 crc 
kubenswrapper[4847]: I0218 00:29:09.497705 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="4419c48a-0a19-486b-ad17-b88461b9377b" containerName="extract-utilities" Feb 18 00:29:09 crc kubenswrapper[4847]: E0218 00:29:09.497713 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4419c48a-0a19-486b-ad17-b88461b9377b" containerName="registry-server" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.497718 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="4419c48a-0a19-486b-ad17-b88461b9377b" containerName="registry-server" Feb 18 00:29:09 crc kubenswrapper[4847]: E0218 00:29:09.497728 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23c01b00-fb74-42f7-8a1a-343e78623f37" containerName="extract-utilities" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.497734 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="23c01b00-fb74-42f7-8a1a-343e78623f37" containerName="extract-utilities" Feb 18 00:29:09 crc kubenswrapper[4847]: E0218 00:29:09.497744 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c" containerName="oauth-openshift" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.497750 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c" containerName="oauth-openshift" Feb 18 00:29:09 crc kubenswrapper[4847]: E0218 00:29:09.497756 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a457ff-fd78-446b-85ca-acd23651863f" containerName="registry-server" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.497761 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a457ff-fd78-446b-85ca-acd23651863f" containerName="registry-server" Feb 18 00:29:09 crc kubenswrapper[4847]: E0218 00:29:09.497769 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40f2a712-6701-4c22-94c2-6a644742459b" containerName="extract-utilities" Feb 18 00:29:09 crc 
kubenswrapper[4847]: I0218 00:29:09.497775 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f2a712-6701-4c22-94c2-6a644742459b" containerName="extract-utilities" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.497867 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="40f2a712-6701-4c22-94c2-6a644742459b" containerName="registry-server" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.497884 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="4419c48a-0a19-486b-ad17-b88461b9377b" containerName="registry-server" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.497893 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="23c01b00-fb74-42f7-8a1a-343e78623f37" containerName="registry-server" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.497903 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a457ff-fd78-446b-85ca-acd23651863f" containerName="registry-server" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.497911 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7ee003d-8f4d-4fdb-96f5-4d533ea17b3c" containerName="oauth-openshift" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.498329 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.501426 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.503079 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.503358 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.503499 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.503823 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.504112 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.504235 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.504362 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.504466 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.504873 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 00:29:09 crc kubenswrapper[4847]: 
I0218 00:29:09.505040 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.506555 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.511371 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.513875 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.517861 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.551808 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-bd7987fd5-qxf65"] Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.599038 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-system-serving-cert\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.599098 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-user-template-login\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " 
pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.599122 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.599269 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8eaa1725-bc13-4840-9373-e92afe719200-audit-policies\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.599327 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-system-service-ca\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.599384 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-system-cliconfig\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.599407 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.599508 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-user-template-error\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.599579 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.599687 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rksvb\" (UniqueName: \"kubernetes.io/projected/8eaa1725-bc13-4840-9373-e92afe719200-kube-api-access-rksvb\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.599735 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-system-router-certs\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.599780 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8eaa1725-bc13-4840-9373-e92afe719200-audit-dir\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.599821 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.599862 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-system-session\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.700897 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rksvb\" (UniqueName: \"kubernetes.io/projected/8eaa1725-bc13-4840-9373-e92afe719200-kube-api-access-rksvb\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " 
pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.700951 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-system-router-certs\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.700977 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8eaa1725-bc13-4840-9373-e92afe719200-audit-dir\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.701014 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.701045 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-system-session\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.701066 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-system-serving-cert\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.701077 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8eaa1725-bc13-4840-9373-e92afe719200-audit-dir\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.701089 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-user-template-login\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.701109 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.701143 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8eaa1725-bc13-4840-9373-e92afe719200-audit-policies\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.701165 
4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-system-service-ca\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.701194 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-system-cliconfig\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.701215 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.701239 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-user-template-error\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.701267 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.702348 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8eaa1725-bc13-4840-9373-e92afe719200-audit-policies\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.702443 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.703097 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-system-service-ca\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.703130 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-system-cliconfig\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc 
kubenswrapper[4847]: I0218 00:29:09.707771 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-system-router-certs\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.707841 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-user-template-error\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.708116 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.708796 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-system-session\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.708925 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-system-serving-cert\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.709156 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.710081 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.710456 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8eaa1725-bc13-4840-9373-e92afe719200-v4-0-config-user-template-login\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 crc kubenswrapper[4847]: I0218 00:29:09.720054 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rksvb\" (UniqueName: \"kubernetes.io/projected/8eaa1725-bc13-4840-9373-e92afe719200-kube-api-access-rksvb\") pod \"oauth-openshift-bd7987fd5-qxf65\" (UID: \"8eaa1725-bc13-4840-9373-e92afe719200\") " pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:09 
crc kubenswrapper[4847]: I0218 00:29:09.815868 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:10 crc kubenswrapper[4847]: I0218 00:29:10.287938 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-bd7987fd5-qxf65"] Feb 18 00:29:10 crc kubenswrapper[4847]: W0218 00:29:10.295548 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eaa1725_bc13_4840_9373_e92afe719200.slice/crio-7641ae6dd18a08dca0f4158276b6a4057caa7f9c2db49a2d72153d725b281496 WatchSource:0}: Error finding container 7641ae6dd18a08dca0f4158276b6a4057caa7f9c2db49a2d72153d725b281496: Status 404 returned error can't find the container with id 7641ae6dd18a08dca0f4158276b6a4057caa7f9c2db49a2d72153d725b281496 Feb 18 00:29:11 crc kubenswrapper[4847]: I0218 00:29:11.256435 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" event={"ID":"8eaa1725-bc13-4840-9373-e92afe719200","Type":"ContainerStarted","Data":"f5b4808e5c722baeb4a0defb82c01aaf9c4d82f9c930a6f4c92ae082854d4452"} Feb 18 00:29:11 crc kubenswrapper[4847]: I0218 00:29:11.257343 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" event={"ID":"8eaa1725-bc13-4840-9373-e92afe719200","Type":"ContainerStarted","Data":"7641ae6dd18a08dca0f4158276b6a4057caa7f9c2db49a2d72153d725b281496"} Feb 18 00:29:11 crc kubenswrapper[4847]: I0218 00:29:11.257362 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:11 crc kubenswrapper[4847]: I0218 00:29:11.268175 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" Feb 18 00:29:11 crc 
kubenswrapper[4847]: I0218 00:29:11.280681 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-bd7987fd5-qxf65" podStartSLOduration=33.280658616 podStartE2EDuration="33.280658616s" podCreationTimestamp="2026-02-18 00:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:29:11.278921054 +0000 UTC m=+224.656271996" watchObservedRunningTime="2026-02-18 00:29:11.280658616 +0000 UTC m=+224.658009618" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.484072 4847 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.485008 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52" gracePeriod=15 Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.485035 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f" gracePeriod=15 Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.485084 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279" gracePeriod=15 Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.485133 4847 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29" gracePeriod=15 Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.485261 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9" gracePeriod=15 Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.486431 4847 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 00:29:23 crc kubenswrapper[4847]: E0218 00:29:23.486804 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.486820 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 00:29:23 crc kubenswrapper[4847]: E0218 00:29:23.486836 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.486854 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 18 00:29:23 crc kubenswrapper[4847]: E0218 00:29:23.486862 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.486883 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Feb 18 00:29:23 crc kubenswrapper[4847]: E0218 00:29:23.486894 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.486900 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 00:29:23 crc kubenswrapper[4847]: E0218 00:29:23.486917 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.486924 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 00:29:23 crc kubenswrapper[4847]: E0218 00:29:23.486931 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.486940 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.487082 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.487099 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.487108 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.487120 4847 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.487135 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 18 00:29:23 crc kubenswrapper[4847]: E0218 00:29:23.489039 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.489058 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.489159 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.491020 4847 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.492426 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.500748 4847 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.610238 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.610283 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.610308 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.610362 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.610384 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.610399 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.610647 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.610696 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.712281 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.712394 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.712424 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.712445 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.712479 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.712499 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.712526 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.712559 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.712547 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.712620 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.712633 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.712613 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.712658 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.712571 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.712682 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:29:23 crc kubenswrapper[4847]: I0218 00:29:23.712678 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:29:24 crc 
kubenswrapper[4847]: I0218 00:29:24.357542 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 18 00:29:24 crc kubenswrapper[4847]: I0218 00:29:24.359567 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 00:29:24 crc kubenswrapper[4847]: I0218 00:29:24.360543 4847 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f" exitCode=0 Feb 18 00:29:24 crc kubenswrapper[4847]: I0218 00:29:24.360592 4847 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9" exitCode=0 Feb 18 00:29:24 crc kubenswrapper[4847]: I0218 00:29:24.360641 4847 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279" exitCode=0 Feb 18 00:29:24 crc kubenswrapper[4847]: I0218 00:29:24.360656 4847 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29" exitCode=2 Feb 18 00:29:24 crc kubenswrapper[4847]: I0218 00:29:24.360674 4847 scope.go:117] "RemoveContainer" containerID="a5085cf77669f20fa96fa49953a3cec5b5b3f12b162a8f6ee7cf167e15e70832" Feb 18 00:29:24 crc kubenswrapper[4847]: I0218 00:29:24.363123 4847 generic.go:334] "Generic (PLEG): container finished" podID="7fb04bef-d533-4b90-9c2e-71d6a27ce5a5" containerID="40b62006ae47e8455ffa52bd56dc74a74a5ef71a338b1328dbdf8f5cfafc85ca" exitCode=0 Feb 18 00:29:24 crc kubenswrapper[4847]: I0218 00:29:24.363206 4847 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7fb04bef-d533-4b90-9c2e-71d6a27ce5a5","Type":"ContainerDied","Data":"40b62006ae47e8455ffa52bd56dc74a74a5ef71a338b1328dbdf8f5cfafc85ca"} Feb 18 00:29:24 crc kubenswrapper[4847]: I0218 00:29:24.364763 4847 status_manager.go:851] "Failed to get status for pod" podUID="7fb04bef-d533-4b90-9c2e-71d6a27ce5a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 00:29:25 crc kubenswrapper[4847]: I0218 00:29:25.373897 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 00:29:25 crc kubenswrapper[4847]: I0218 00:29:25.787103 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:29:25 crc kubenswrapper[4847]: I0218 00:29:25.788484 4847 status_manager.go:851] "Failed to get status for pod" podUID="7fb04bef-d533-4b90-9c2e-71d6a27ce5a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 00:29:25 crc kubenswrapper[4847]: I0218 00:29:25.854659 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7fb04bef-d533-4b90-9c2e-71d6a27ce5a5-kube-api-access\") pod \"7fb04bef-d533-4b90-9c2e-71d6a27ce5a5\" (UID: \"7fb04bef-d533-4b90-9c2e-71d6a27ce5a5\") " Feb 18 00:29:25 crc kubenswrapper[4847]: I0218 00:29:25.854750 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/7fb04bef-d533-4b90-9c2e-71d6a27ce5a5-kubelet-dir\") pod \"7fb04bef-d533-4b90-9c2e-71d6a27ce5a5\" (UID: \"7fb04bef-d533-4b90-9c2e-71d6a27ce5a5\") " Feb 18 00:29:25 crc kubenswrapper[4847]: I0218 00:29:25.854835 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7fb04bef-d533-4b90-9c2e-71d6a27ce5a5-var-lock\") pod \"7fb04bef-d533-4b90-9c2e-71d6a27ce5a5\" (UID: \"7fb04bef-d533-4b90-9c2e-71d6a27ce5a5\") " Feb 18 00:29:25 crc kubenswrapper[4847]: I0218 00:29:25.854956 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fb04bef-d533-4b90-9c2e-71d6a27ce5a5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7fb04bef-d533-4b90-9c2e-71d6a27ce5a5" (UID: "7fb04bef-d533-4b90-9c2e-71d6a27ce5a5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:29:25 crc kubenswrapper[4847]: I0218 00:29:25.855154 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fb04bef-d533-4b90-9c2e-71d6a27ce5a5-var-lock" (OuterVolumeSpecName: "var-lock") pod "7fb04bef-d533-4b90-9c2e-71d6a27ce5a5" (UID: "7fb04bef-d533-4b90-9c2e-71d6a27ce5a5"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:29:25 crc kubenswrapper[4847]: I0218 00:29:25.855752 4847 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7fb04bef-d533-4b90-9c2e-71d6a27ce5a5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:29:25 crc kubenswrapper[4847]: I0218 00:29:25.855800 4847 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7fb04bef-d533-4b90-9c2e-71d6a27ce5a5-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 00:29:25 crc kubenswrapper[4847]: I0218 00:29:25.862409 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fb04bef-d533-4b90-9c2e-71d6a27ce5a5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7fb04bef-d533-4b90-9c2e-71d6a27ce5a5" (UID: "7fb04bef-d533-4b90-9c2e-71d6a27ce5a5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:29:25 crc kubenswrapper[4847]: I0218 00:29:25.947683 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 00:29:25 crc kubenswrapper[4847]: I0218 00:29:25.948836 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:29:25 crc kubenswrapper[4847]: I0218 00:29:25.949648 4847 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 00:29:25 crc kubenswrapper[4847]: I0218 00:29:25.950362 4847 status_manager.go:851] "Failed to get status for pod" podUID="7fb04bef-d533-4b90-9c2e-71d6a27ce5a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 00:29:25 crc kubenswrapper[4847]: I0218 00:29:25.957090 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7fb04bef-d533-4b90-9c2e-71d6a27ce5a5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.058699 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.058808 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.058835 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.058841 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.058974 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.058997 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.059153 4847 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.059167 4847 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.059176 4847 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.385866 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.386811 4847 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52" exitCode=0 Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.386882 4847 scope.go:117] "RemoveContainer" containerID="1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.386883 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.390123 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.390080 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7fb04bef-d533-4b90-9c2e-71d6a27ce5a5","Type":"ContainerDied","Data":"b1649f2aa4ba6d40ebaf5b58ba0e4db105f7488518948516ff8e89fe048c71c3"} Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.390279 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1649f2aa4ba6d40ebaf5b58ba0e4db105f7488518948516ff8e89fe048c71c3" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.402537 4847 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.403139 4847 status_manager.go:851] "Failed to get status for pod" podUID="7fb04bef-d533-4b90-9c2e-71d6a27ce5a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.405382 4847 scope.go:117] "RemoveContainer" containerID="09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.408118 4847 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.408490 
4847 status_manager.go:851] "Failed to get status for pod" podUID="7fb04bef-d533-4b90-9c2e-71d6a27ce5a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.423644 4847 scope.go:117] "RemoveContainer" containerID="61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.442328 4847 scope.go:117] "RemoveContainer" containerID="cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.462615 4847 scope.go:117] "RemoveContainer" containerID="2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.484471 4847 scope.go:117] "RemoveContainer" containerID="6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1" Feb 18 00:29:26 crc kubenswrapper[4847]: E0218 00:29:26.501906 4847 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" volumeName="registry-storage" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.512622 4847 scope.go:117] "RemoveContainer" containerID="1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f" Feb 18 00:29:26 crc kubenswrapper[4847]: E0218 00:29:26.513100 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\": container with ID starting with 1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f not found: ID does not exist" containerID="1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.513142 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f"} err="failed to get container status \"1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\": rpc error: code = NotFound desc = could not find container \"1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f\": container with ID starting with 1657025b7394b068048970fdf07e24f3e1dda44f42ca872bb3cc526e3d65cd6f not found: ID does not exist" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.513170 4847 scope.go:117] "RemoveContainer" containerID="09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9" Feb 18 00:29:26 crc kubenswrapper[4847]: E0218 00:29:26.513855 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\": container with ID starting with 09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9 not found: ID does not exist" containerID="09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.513885 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9"} err="failed to get container status \"09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\": rpc error: code = NotFound desc = could not find container \"09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9\": container with ID 
starting with 09e3a2dfc4f5a7deb9d4539ecf41c1e19ac7f1b62e276b07b171c1e44338ace9 not found: ID does not exist" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.513914 4847 scope.go:117] "RemoveContainer" containerID="61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279" Feb 18 00:29:26 crc kubenswrapper[4847]: E0218 00:29:26.514493 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\": container with ID starting with 61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279 not found: ID does not exist" containerID="61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.514539 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279"} err="failed to get container status \"61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\": rpc error: code = NotFound desc = could not find container \"61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279\": container with ID starting with 61186707cc4846bf5efeeffd61167be6eaaa1472a57bf65a02cc11b938fdb279 not found: ID does not exist" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.514572 4847 scope.go:117] "RemoveContainer" containerID="cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29" Feb 18 00:29:26 crc kubenswrapper[4847]: E0218 00:29:26.514986 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\": container with ID starting with cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29 not found: ID does not exist" containerID="cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29" Feb 18 
00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.515021 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29"} err="failed to get container status \"cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\": rpc error: code = NotFound desc = could not find container \"cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29\": container with ID starting with cd97e33c5fd3997cbe62c577b5f24157d2a187fbc1cbe03e6caecc472c74fe29 not found: ID does not exist" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.515042 4847 scope.go:117] "RemoveContainer" containerID="2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52" Feb 18 00:29:26 crc kubenswrapper[4847]: E0218 00:29:26.515617 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\": container with ID starting with 2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52 not found: ID does not exist" containerID="2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.515645 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52"} err="failed to get container status \"2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\": rpc error: code = NotFound desc = could not find container \"2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52\": container with ID starting with 2842318f59046b237b0e79b823bc3ffdebbcf837024ddfce5f223a4734a47e52 not found: ID does not exist" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.515663 4847 scope.go:117] "RemoveContainer" 
containerID="6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1" Feb 18 00:29:26 crc kubenswrapper[4847]: E0218 00:29:26.515920 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\": container with ID starting with 6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1 not found: ID does not exist" containerID="6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1" Feb 18 00:29:26 crc kubenswrapper[4847]: I0218 00:29:26.515936 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1"} err="failed to get container status \"6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\": rpc error: code = NotFound desc = could not find container \"6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1\": container with ID starting with 6b8bac079bcfdcd94cb291f0c07a7bf989e252914c36d315dd0c259c257001c1 not found: ID does not exist" Feb 18 00:29:27 crc kubenswrapper[4847]: I0218 00:29:27.411133 4847 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 00:29:27 crc kubenswrapper[4847]: I0218 00:29:27.413165 4847 status_manager.go:851] "Failed to get status for pod" podUID="7fb04bef-d533-4b90-9c2e-71d6a27ce5a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 00:29:27 crc kubenswrapper[4847]: I0218 00:29:27.429926 4847 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 18 00:29:27 crc kubenswrapper[4847]: E0218 00:29:27.732094 4847 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 00:29:27 crc kubenswrapper[4847]: E0218 00:29:27.732840 4847 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 00:29:27 crc kubenswrapper[4847]: E0218 00:29:27.733693 4847 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 00:29:27 crc kubenswrapper[4847]: E0218 00:29:27.734194 4847 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 00:29:27 crc kubenswrapper[4847]: E0218 00:29:27.734574 4847 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 00:29:27 crc kubenswrapper[4847]: I0218 00:29:27.734663 4847 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 18 00:29:27 crc kubenswrapper[4847]: E0218 00:29:27.735101 4847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="200ms" Feb 18 00:29:27 crc kubenswrapper[4847]: E0218 00:29:27.938018 4847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="400ms" Feb 18 00:29:28 crc kubenswrapper[4847]: E0218 00:29:28.339504 4847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="800ms" Feb 18 00:29:28 crc kubenswrapper[4847]: E0218 00:29:28.542489 4847 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:29:28 crc kubenswrapper[4847]: I0218 00:29:28.542988 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:29:28 crc kubenswrapper[4847]: E0218 00:29:28.568857 4847 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18952fc8f3ac3455 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:29:28.568271957 +0000 UTC m=+241.945622899,LastTimestamp:2026-02-18 00:29:28.568271957 +0000 UTC m=+241.945622899,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:29:29 crc kubenswrapper[4847]: E0218 00:29:29.140771 4847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="1.6s" Feb 18 00:29:29 crc kubenswrapper[4847]: I0218 00:29:29.418247 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"4937de5efe9c69d1cff03af80019ba3d520ca77ef1084cc3b31a17b69cd9a479"} Feb 18 00:29:29 crc 
kubenswrapper[4847]: I0218 00:29:29.418885 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"24713ba4734b1448fe3c5419e39a9781ae4e80121ca1e275c21a6309829f64ab"} Feb 18 00:29:29 crc kubenswrapper[4847]: I0218 00:29:29.419723 4847 status_manager.go:851] "Failed to get status for pod" podUID="7fb04bef-d533-4b90-9c2e-71d6a27ce5a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 00:29:29 crc kubenswrapper[4847]: E0218 00:29:29.420209 4847 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:29:30 crc kubenswrapper[4847]: E0218 00:29:30.426853 4847 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:29:30 crc kubenswrapper[4847]: E0218 00:29:30.742391 4847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="3.2s" Feb 18 00:29:33 crc kubenswrapper[4847]: E0218 00:29:33.944121 4847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: 
connection refused" interval="6.4s" Feb 18 00:29:34 crc kubenswrapper[4847]: I0218 00:29:34.403930 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:29:34 crc kubenswrapper[4847]: I0218 00:29:34.405552 4847 status_manager.go:851] "Failed to get status for pod" podUID="7fb04bef-d533-4b90-9c2e-71d6a27ce5a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 00:29:34 crc kubenswrapper[4847]: I0218 00:29:34.425072 4847 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08262fa8-b3b6-49f5-b5cd-d9d81dddb06e" Feb 18 00:29:34 crc kubenswrapper[4847]: I0218 00:29:34.425114 4847 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08262fa8-b3b6-49f5-b5cd-d9d81dddb06e" Feb 18 00:29:34 crc kubenswrapper[4847]: E0218 00:29:34.425517 4847 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:29:34 crc kubenswrapper[4847]: I0218 00:29:34.426139 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:29:35 crc kubenswrapper[4847]: I0218 00:29:35.462580 4847 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="9c5d01a0862293fc69b3acb306ae17e4527cc845fbc55d82b2c425e40951f48c" exitCode=0 Feb 18 00:29:35 crc kubenswrapper[4847]: I0218 00:29:35.462795 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"9c5d01a0862293fc69b3acb306ae17e4527cc845fbc55d82b2c425e40951f48c"} Feb 18 00:29:35 crc kubenswrapper[4847]: I0218 00:29:35.463231 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bd37ed37ada5590a4606d1091fff7369c52f771c6cd059361f5447ff7645d79e"} Feb 18 00:29:35 crc kubenswrapper[4847]: I0218 00:29:35.463936 4847 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08262fa8-b3b6-49f5-b5cd-d9d81dddb06e" Feb 18 00:29:35 crc kubenswrapper[4847]: I0218 00:29:35.463975 4847 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08262fa8-b3b6-49f5-b5cd-d9d81dddb06e" Feb 18 00:29:35 crc kubenswrapper[4847]: E0218 00:29:35.464791 4847 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:29:35 crc kubenswrapper[4847]: I0218 00:29:35.464809 4847 status_manager.go:851] "Failed to get status for pod" podUID="7fb04bef-d533-4b90-9c2e-71d6a27ce5a5" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Feb 18 00:29:35 crc kubenswrapper[4847]: E0218 00:29:35.675556 4847 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18952fc8f3ac3455 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:29:28.568271957 +0000 UTC m=+241.945622899,LastTimestamp:2026-02-18 00:29:28.568271957 +0000 UTC m=+241.945622899,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:29:36 crc kubenswrapper[4847]: I0218 00:29:36.477194 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bd1b4f1389e4f7378ccd249cde09f35258664593eae37a33214a748860c2bbe5"} Feb 18 00:29:36 crc kubenswrapper[4847]: I0218 00:29:36.477757 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5cf1956b50b5fbece9b5dc1449973df6e9f942061f0b6b4276e28f879855ff82"} Feb 18 00:29:36 crc kubenswrapper[4847]: I0218 00:29:36.477772 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7ebc441cafc83f6753b8f2ed71d46772bae3f57855c4ba97d2eb9b79deed0142"} Feb 18 00:29:37 crc kubenswrapper[4847]: I0218 00:29:37.486318 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 00:29:37 crc kubenswrapper[4847]: I0218 00:29:37.486386 4847 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa" exitCode=1 Feb 18 00:29:37 crc kubenswrapper[4847]: I0218 00:29:37.486451 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa"} Feb 18 00:29:37 crc kubenswrapper[4847]: I0218 00:29:37.487154 4847 scope.go:117] "RemoveContainer" containerID="e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa" Feb 18 00:29:37 crc kubenswrapper[4847]: I0218 00:29:37.490034 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4ea7523813a08b8229f15e637970fedd1eae804cfcc3e16e7d964a2f0a5764a9"} Feb 18 00:29:37 crc kubenswrapper[4847]: I0218 00:29:37.490094 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0b669257d453d1f21d489d5947908150b45b8f225cf3d9ab1d31215a60f9bbb5"} Feb 18 00:29:37 crc kubenswrapper[4847]: I0218 00:29:37.490330 4847 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08262fa8-b3b6-49f5-b5cd-d9d81dddb06e" Feb 18 00:29:37 crc kubenswrapper[4847]: I0218 00:29:37.490348 4847 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08262fa8-b3b6-49f5-b5cd-d9d81dddb06e" Feb 18 00:29:37 crc kubenswrapper[4847]: I0218 00:29:37.490675 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:29:37 crc kubenswrapper[4847]: I0218 00:29:37.501031 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:29:38 crc kubenswrapper[4847]: I0218 00:29:38.499653 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 18 00:29:38 crc kubenswrapper[4847]: I0218 00:29:38.500005 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"21ff0a004dfd85a8d7b1d1b806b95894f90fe652d3a2ab936836e9b81a7c4fd8"} Feb 18 00:29:39 crc kubenswrapper[4847]: I0218 00:29:39.426571 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:29:39 crc kubenswrapper[4847]: I0218 00:29:39.426671 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:29:39 crc kubenswrapper[4847]: I0218 00:29:39.433642 4847 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:29:39 crc kubenswrapper[4847]: I0218 00:29:39.785715 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:29:39 crc kubenswrapper[4847]: I0218 00:29:39.786013 4847 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 18 00:29:39 crc kubenswrapper[4847]: I0218 00:29:39.786114 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 18 00:29:42 crc kubenswrapper[4847]: I0218 00:29:42.504362 4847 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:29:42 crc kubenswrapper[4847]: I0218 00:29:42.529940 4847 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08262fa8-b3b6-49f5-b5cd-d9d81dddb06e" Feb 18 00:29:42 crc kubenswrapper[4847]: I0218 00:29:42.529981 4847 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08262fa8-b3b6-49f5-b5cd-d9d81dddb06e" Feb 18 00:29:42 crc kubenswrapper[4847]: I0218 00:29:42.538089 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:29:42 crc kubenswrapper[4847]: I0218 00:29:42.545221 4847 status_manager.go:861] "Pod was deleted and then recreated, skipping 
status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="dfc87ae7-d54d-4b05-9504-1d7c28cb665b" Feb 18 00:29:43 crc kubenswrapper[4847]: I0218 00:29:43.534905 4847 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08262fa8-b3b6-49f5-b5cd-d9d81dddb06e" Feb 18 00:29:43 crc kubenswrapper[4847]: I0218 00:29:43.535409 4847 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08262fa8-b3b6-49f5-b5cd-d9d81dddb06e" Feb 18 00:29:43 crc kubenswrapper[4847]: I0218 00:29:43.935593 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:29:47 crc kubenswrapper[4847]: I0218 00:29:47.435834 4847 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="dfc87ae7-d54d-4b05-9504-1d7c28cb665b" Feb 18 00:29:49 crc kubenswrapper[4847]: I0218 00:29:49.381665 4847 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 18 00:29:49 crc kubenswrapper[4847]: I0218 00:29:49.786850 4847 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 18 00:29:49 crc kubenswrapper[4847]: I0218 00:29:49.786922 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection 
refused" Feb 18 00:29:50 crc kubenswrapper[4847]: I0218 00:29:50.447497 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 18 00:29:50 crc kubenswrapper[4847]: I0218 00:29:50.810244 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 18 00:29:51 crc kubenswrapper[4847]: I0218 00:29:51.346724 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 18 00:29:51 crc kubenswrapper[4847]: I0218 00:29:51.346892 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 18 00:29:51 crc kubenswrapper[4847]: I0218 00:29:51.386022 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 18 00:29:51 crc kubenswrapper[4847]: I0218 00:29:51.495783 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 18 00:29:51 crc kubenswrapper[4847]: I0218 00:29:51.960389 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 18 00:29:52 crc kubenswrapper[4847]: I0218 00:29:52.406053 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 18 00:29:52 crc kubenswrapper[4847]: I0218 00:29:52.486328 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 18 00:29:52 crc kubenswrapper[4847]: I0218 00:29:52.776572 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 18 00:29:53 crc kubenswrapper[4847]: I0218 00:29:53.594756 4847 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 18 00:29:53 crc kubenswrapper[4847]: I0218 00:29:53.948944 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 18 00:29:54 crc kubenswrapper[4847]: I0218 00:29:54.258251 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 18 00:29:54 crc kubenswrapper[4847]: I0218 00:29:54.323466 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 18 00:29:54 crc kubenswrapper[4847]: I0218 00:29:54.454061 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 18 00:29:54 crc kubenswrapper[4847]: I0218 00:29:54.499002 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 18 00:29:55 crc kubenswrapper[4847]: I0218 00:29:55.022064 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 18 00:29:55 crc kubenswrapper[4847]: I0218 00:29:55.124350 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 18 00:29:55 crc kubenswrapper[4847]: I0218 00:29:55.145354 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 18 00:29:55 crc kubenswrapper[4847]: I0218 00:29:55.235105 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 00:29:55 crc kubenswrapper[4847]: I0218 00:29:55.268393 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 18 00:29:55 crc kubenswrapper[4847]: I0218 
00:29:55.495040 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 18 00:29:55 crc kubenswrapper[4847]: I0218 00:29:55.666518 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 18 00:29:55 crc kubenswrapper[4847]: I0218 00:29:55.851117 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 18 00:29:56 crc kubenswrapper[4847]: I0218 00:29:56.039968 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 18 00:29:56 crc kubenswrapper[4847]: I0218 00:29:56.123175 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 18 00:29:56 crc kubenswrapper[4847]: I0218 00:29:56.145992 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 18 00:29:56 crc kubenswrapper[4847]: I0218 00:29:56.267893 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 18 00:29:56 crc kubenswrapper[4847]: I0218 00:29:56.475718 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 18 00:29:56 crc kubenswrapper[4847]: I0218 00:29:56.506717 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 18 00:29:56 crc kubenswrapper[4847]: I0218 00:29:56.547728 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 18 00:29:56 crc kubenswrapper[4847]: I0218 
00:29:56.759378 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 18 00:29:56 crc kubenswrapper[4847]: I0218 00:29:56.788893 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 18 00:29:56 crc kubenswrapper[4847]: I0218 00:29:56.798998 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 18 00:29:56 crc kubenswrapper[4847]: I0218 00:29:56.990617 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 18 00:29:57 crc kubenswrapper[4847]: I0218 00:29:57.043653 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 18 00:29:57 crc kubenswrapper[4847]: I0218 00:29:57.124982 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 18 00:29:57 crc kubenswrapper[4847]: I0218 00:29:57.129479 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 18 00:29:57 crc kubenswrapper[4847]: I0218 00:29:57.140993 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 18 00:29:57 crc kubenswrapper[4847]: I0218 00:29:57.211836 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 18 00:29:57 crc kubenswrapper[4847]: I0218 00:29:57.247871 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 18 00:29:57 crc kubenswrapper[4847]: I0218 00:29:57.319901 4847 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 18 00:29:57 crc kubenswrapper[4847]: I0218 00:29:57.466433 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 18 00:29:57 crc kubenswrapper[4847]: I0218 00:29:57.637424 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 18 00:29:57 crc kubenswrapper[4847]: I0218 00:29:57.870553 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 18 00:29:57 crc kubenswrapper[4847]: I0218 00:29:57.889582 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 18 00:29:58 crc kubenswrapper[4847]: I0218 00:29:58.082262 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 18 00:29:58 crc kubenswrapper[4847]: I0218 00:29:58.143108 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 18 00:29:58 crc kubenswrapper[4847]: I0218 00:29:58.162869 4847 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 18 00:29:58 crc kubenswrapper[4847]: I0218 00:29:58.463821 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 18 00:29:58 crc kubenswrapper[4847]: I0218 00:29:58.564779 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 18 00:29:58 crc kubenswrapper[4847]: I0218 00:29:58.650509 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 18 00:29:58 crc kubenswrapper[4847]: I0218 00:29:58.654212 4847 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 18 00:29:58 crc kubenswrapper[4847]: I0218 00:29:58.756668 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 18 00:29:58 crc kubenswrapper[4847]: I0218 00:29:58.820151 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 18 00:29:58 crc kubenswrapper[4847]: I0218 00:29:58.881681 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 18 00:29:58 crc kubenswrapper[4847]: I0218 00:29:58.893970 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.024328 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.093878 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.103041 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.117446 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.130236 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.165729 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 
18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.186563 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.241582 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.306335 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.332711 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.410062 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.573102 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.656192 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.727113 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.736535 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.757168 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.786469 4847 patch_prober.go:28] interesting pod/kube-controller-manager-crc 
container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.786560 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.786648 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.787419 4847 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"21ff0a004dfd85a8d7b1d1b806b95894f90fe652d3a2ab936836e9b81a7c4fd8"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.787544 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://21ff0a004dfd85a8d7b1d1b806b95894f90fe652d3a2ab936836e9b81a7c4fd8" gracePeriod=30 Feb 18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.789071 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.833694 4847 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 18 00:29:59 crc kubenswrapper[4847]: I0218 00:29:59.901779 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 18 00:30:00 crc kubenswrapper[4847]: I0218 00:30:00.103914 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 18 00:30:00 crc kubenswrapper[4847]: I0218 00:30:00.138924 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 18 00:30:00 crc kubenswrapper[4847]: I0218 00:30:00.170219 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 18 00:30:00 crc kubenswrapper[4847]: I0218 00:30:00.183867 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 18 00:30:00 crc kubenswrapper[4847]: I0218 00:30:00.188346 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 18 00:30:00 crc kubenswrapper[4847]: I0218 00:30:00.232822 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 18 00:30:00 crc kubenswrapper[4847]: I0218 00:30:00.243134 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 18 00:30:00 crc kubenswrapper[4847]: I0218 00:30:00.340941 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 18 00:30:00 crc kubenswrapper[4847]: I0218 00:30:00.460373 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 18 00:30:00 crc kubenswrapper[4847]: I0218 00:30:00.464792 4847 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"proxy-tls" Feb 18 00:30:00 crc kubenswrapper[4847]: I0218 00:30:00.551927 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 18 00:30:00 crc kubenswrapper[4847]: I0218 00:30:00.562033 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 18 00:30:00 crc kubenswrapper[4847]: I0218 00:30:00.614497 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 18 00:30:00 crc kubenswrapper[4847]: I0218 00:30:00.643901 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 18 00:30:00 crc kubenswrapper[4847]: I0218 00:30:00.691312 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 18 00:30:00 crc kubenswrapper[4847]: I0218 00:30:00.881082 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 18 00:30:00 crc kubenswrapper[4847]: I0218 00:30:00.913385 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.236971 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.245636 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.277818 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.297220 4847 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.365551 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.377892 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.386234 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.405332 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.418217 4847 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.418447 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.461036 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.492203 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.502292 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.578081 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 18 00:30:01 crc 
kubenswrapper[4847]: I0218 00:30:01.589002 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.614782 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.623576 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.633559 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.636356 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.711385 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.744367 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.788732 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.791390 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.815180 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.840157 4847 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 00:30:01 crc kubenswrapper[4847]: I0218 00:30:01.920419 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 18 00:30:02 crc kubenswrapper[4847]: I0218 00:30:02.004974 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 18 00:30:02 crc kubenswrapper[4847]: I0218 00:30:02.055045 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 18 00:30:02 crc kubenswrapper[4847]: I0218 00:30:02.082317 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 18 00:30:02 crc kubenswrapper[4847]: I0218 00:30:02.107963 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 18 00:30:02 crc kubenswrapper[4847]: I0218 00:30:02.152976 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 18 00:30:02 crc kubenswrapper[4847]: I0218 00:30:02.270202 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 18 00:30:02 crc kubenswrapper[4847]: I0218 00:30:02.270504 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 18 00:30:02 crc kubenswrapper[4847]: I0218 00:30:02.281817 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 18 00:30:02 crc kubenswrapper[4847]: I0218 00:30:02.358977 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 18 00:30:02 crc 
kubenswrapper[4847]: I0218 00:30:02.388962 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 18 00:30:02 crc kubenswrapper[4847]: I0218 00:30:02.393014 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 18 00:30:02 crc kubenswrapper[4847]: I0218 00:30:02.408189 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 18 00:30:02 crc kubenswrapper[4847]: I0218 00:30:02.408220 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 18 00:30:02 crc kubenswrapper[4847]: I0218 00:30:02.426241 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 18 00:30:02 crc kubenswrapper[4847]: I0218 00:30:02.586739 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 18 00:30:02 crc kubenswrapper[4847]: I0218 00:30:02.697313 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 18 00:30:02 crc kubenswrapper[4847]: I0218 00:30:02.752847 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 18 00:30:02 crc kubenswrapper[4847]: I0218 00:30:02.813558 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 18 00:30:03 crc kubenswrapper[4847]: I0218 00:30:03.027332 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 18 00:30:03 crc kubenswrapper[4847]: I0218 00:30:03.083378 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 18 
00:30:03 crc kubenswrapper[4847]: I0218 00:30:03.153183 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 18 00:30:03 crc kubenswrapper[4847]: I0218 00:30:03.236437 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 18 00:30:03 crc kubenswrapper[4847]: I0218 00:30:03.278071 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 18 00:30:03 crc kubenswrapper[4847]: I0218 00:30:03.366718 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 18 00:30:03 crc kubenswrapper[4847]: I0218 00:30:03.369081 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 18 00:30:03 crc kubenswrapper[4847]: I0218 00:30:03.433853 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 18 00:30:03 crc kubenswrapper[4847]: I0218 00:30:03.457061 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 18 00:30:03 crc kubenswrapper[4847]: I0218 00:30:03.479634 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 18 00:30:03 crc kubenswrapper[4847]: I0218 00:30:03.502755 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 18 00:30:03 crc kubenswrapper[4847]: I0218 00:30:03.536212 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 18 00:30:03 crc kubenswrapper[4847]: I0218 00:30:03.539437 4847 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 18 00:30:03 crc kubenswrapper[4847]: I0218 00:30:03.580854 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 18 00:30:03 crc kubenswrapper[4847]: I0218 00:30:03.652682 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 18 00:30:03 crc kubenswrapper[4847]: I0218 00:30:03.718349 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 18 00:30:03 crc kubenswrapper[4847]: I0218 00:30:03.722972 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 18 00:30:03 crc kubenswrapper[4847]: I0218 00:30:03.885807 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 18 00:30:03 crc kubenswrapper[4847]: I0218 00:30:03.893233 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 18 00:30:03 crc kubenswrapper[4847]: I0218 00:30:03.894716 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 18 00:30:03 crc kubenswrapper[4847]: I0218 00:30:03.966121 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 18 00:30:03 crc kubenswrapper[4847]: I0218 00:30:03.971587 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 18 00:30:04 crc kubenswrapper[4847]: I0218 00:30:04.100981 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 18 00:30:04 crc kubenswrapper[4847]: I0218 
00:30:04.112533 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 18 00:30:04 crc kubenswrapper[4847]: I0218 00:30:04.155066 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 18 00:30:04 crc kubenswrapper[4847]: I0218 00:30:04.176411 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 18 00:30:04 crc kubenswrapper[4847]: I0218 00:30:04.200942 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 18 00:30:04 crc kubenswrapper[4847]: I0218 00:30:04.217855 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 18 00:30:04 crc kubenswrapper[4847]: I0218 00:30:04.288750 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 18 00:30:04 crc kubenswrapper[4847]: I0218 00:30:04.290422 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 18 00:30:04 crc kubenswrapper[4847]: I0218 00:30:04.414118 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 18 00:30:04 crc kubenswrapper[4847]: I0218 00:30:04.439997 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 18 00:30:04 crc kubenswrapper[4847]: I0218 00:30:04.455370 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 18 00:30:04 crc kubenswrapper[4847]: I0218 00:30:04.480203 4847 reflector.go:368] Caches 
populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 18 00:30:04 crc kubenswrapper[4847]: I0218 00:30:04.484339 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 00:30:04 crc kubenswrapper[4847]: I0218 00:30:04.484386 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 00:30:04 crc kubenswrapper[4847]: I0218 00:30:04.488537 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:30:04 crc kubenswrapper[4847]: I0218 00:30:04.505121 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.505100566 podStartE2EDuration="22.505100566s" podCreationTimestamp="2026-02-18 00:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:30:04.501173072 +0000 UTC m=+277.878524014" watchObservedRunningTime="2026-02-18 00:30:04.505100566 +0000 UTC m=+277.882451508" Feb 18 00:30:04 crc kubenswrapper[4847]: I0218 00:30:04.522745 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 18 00:30:04 crc kubenswrapper[4847]: I0218 00:30:04.601086 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 18 00:30:04 crc kubenswrapper[4847]: I0218 00:30:04.777406 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 18 00:30:04 crc kubenswrapper[4847]: I0218 00:30:04.817715 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 18 00:30:04 crc kubenswrapper[4847]: I0218 
00:30:04.844382 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 18 00:30:04 crc kubenswrapper[4847]: I0218 00:30:04.984090 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 00:30:05 crc kubenswrapper[4847]: I0218 00:30:05.087629 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 18 00:30:05 crc kubenswrapper[4847]: I0218 00:30:05.136109 4847 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 00:30:05 crc kubenswrapper[4847]: I0218 00:30:05.136553 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://4937de5efe9c69d1cff03af80019ba3d520ca77ef1084cc3b31a17b69cd9a479" gracePeriod=5 Feb 18 00:30:05 crc kubenswrapper[4847]: I0218 00:30:05.154227 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 18 00:30:05 crc kubenswrapper[4847]: I0218 00:30:05.169455 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 18 00:30:05 crc kubenswrapper[4847]: I0218 00:30:05.309266 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 18 00:30:05 crc kubenswrapper[4847]: I0218 00:30:05.363361 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 18 00:30:05 crc kubenswrapper[4847]: I0218 00:30:05.485452 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 
18 00:30:05 crc kubenswrapper[4847]: I0218 00:30:05.488290 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 18 00:30:05 crc kubenswrapper[4847]: I0218 00:30:05.497527 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 18 00:30:05 crc kubenswrapper[4847]: I0218 00:30:05.537052 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 18 00:30:05 crc kubenswrapper[4847]: I0218 00:30:05.557874 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 18 00:30:05 crc kubenswrapper[4847]: I0218 00:30:05.560629 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 18 00:30:05 crc kubenswrapper[4847]: I0218 00:30:05.569179 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 18 00:30:05 crc kubenswrapper[4847]: I0218 00:30:05.765249 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 18 00:30:05 crc kubenswrapper[4847]: I0218 00:30:05.805234 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 18 00:30:05 crc kubenswrapper[4847]: I0218 00:30:05.902151 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 18 00:30:05 crc kubenswrapper[4847]: I0218 00:30:05.920054 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 18 00:30:06 crc kubenswrapper[4847]: I0218 00:30:06.089004 4847 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 18 00:30:06 crc kubenswrapper[4847]: I0218 00:30:06.099975 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 18 00:30:06 crc kubenswrapper[4847]: I0218 00:30:06.141708 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 18 00:30:06 crc kubenswrapper[4847]: I0218 00:30:06.481171 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 18 00:30:06 crc kubenswrapper[4847]: I0218 00:30:06.481633 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 18 00:30:06 crc kubenswrapper[4847]: I0218 00:30:06.747290 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 18 00:30:06 crc kubenswrapper[4847]: I0218 00:30:06.914964 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 18 00:30:06 crc kubenswrapper[4847]: I0218 00:30:06.955800 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 18 00:30:06 crc kubenswrapper[4847]: I0218 00:30:06.977399 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 18 00:30:06 crc kubenswrapper[4847]: I0218 00:30:06.983543 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 18 00:30:07 crc kubenswrapper[4847]: I0218 00:30:07.068397 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 18 00:30:07 crc kubenswrapper[4847]: I0218 00:30:07.080230 4847 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 18 00:30:07 crc kubenswrapper[4847]: I0218 00:30:07.157629 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 18 00:30:07 crc kubenswrapper[4847]: I0218 00:30:07.166762 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 18 00:30:07 crc kubenswrapper[4847]: I0218 00:30:07.184307 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 18 00:30:07 crc kubenswrapper[4847]: I0218 00:30:07.403433 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 18 00:30:07 crc kubenswrapper[4847]: I0218 00:30:07.410872 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 18 00:30:07 crc kubenswrapper[4847]: I0218 00:30:07.618062 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 18 00:30:07 crc kubenswrapper[4847]: I0218 00:30:07.707724 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 18 00:30:07 crc kubenswrapper[4847]: I0218 00:30:07.773029 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 18 00:30:07 crc kubenswrapper[4847]: I0218 00:30:07.794430 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 18 00:30:07 crc kubenswrapper[4847]: I0218 00:30:07.819118 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 18 00:30:07 crc kubenswrapper[4847]: I0218 
00:30:07.900109 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 18 00:30:07 crc kubenswrapper[4847]: I0218 00:30:07.905917 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 18 00:30:07 crc kubenswrapper[4847]: I0218 00:30:07.932029 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 18 00:30:07 crc kubenswrapper[4847]: I0218 00:30:07.955653 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 18 00:30:07 crc kubenswrapper[4847]: I0218 00:30:07.982852 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 18 00:30:08 crc kubenswrapper[4847]: I0218 00:30:08.027061 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 18 00:30:08 crc kubenswrapper[4847]: I0218 00:30:08.061370 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 18 00:30:08 crc kubenswrapper[4847]: I0218 00:30:08.137166 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 18 00:30:08 crc kubenswrapper[4847]: I0218 00:30:08.181169 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 18 00:30:08 crc kubenswrapper[4847]: I0218 00:30:08.262144 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 18 00:30:08 crc kubenswrapper[4847]: I0218 00:30:08.313584 4847 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 18 00:30:08 crc kubenswrapper[4847]: I0218 00:30:08.367458 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 18 00:30:08 crc kubenswrapper[4847]: I0218 00:30:08.453490 4847 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 18 00:30:08 crc kubenswrapper[4847]: I0218 00:30:08.595418 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 18 00:30:08 crc kubenswrapper[4847]: I0218 00:30:08.656701 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 18 00:30:08 crc kubenswrapper[4847]: I0218 00:30:08.656788 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 18 00:30:08 crc kubenswrapper[4847]: I0218 00:30:08.793446 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 18 00:30:09 crc kubenswrapper[4847]: I0218 00:30:09.085519 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 18 00:30:09 crc kubenswrapper[4847]: I0218 00:30:09.105456 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 18 00:30:09 crc kubenswrapper[4847]: I0218 00:30:09.341337 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 18 00:30:09 crc kubenswrapper[4847]: I0218 00:30:09.554778 4847 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 18 00:30:09 crc kubenswrapper[4847]: I0218 00:30:09.883789 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.056240 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.117169 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.161074 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.289070 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.351602 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.518202 4847 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.553399 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.709128 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.709229 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.724402 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.806887 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.806962 4847 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="4937de5efe9c69d1cff03af80019ba3d520ca77ef1084cc3b31a17b69cd9a479" exitCode=137 Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.807032 4847 scope.go:117] "RemoveContainer" containerID="4937de5efe9c69d1cff03af80019ba3d520ca77ef1084cc3b31a17b69cd9a479" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.807043 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.823195 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.823250 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.823293 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.823359 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.823371 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.823393 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.823428 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.823407 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.823500 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.823764 4847 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.823780 4847 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.823789 4847 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.823799 4847 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.836728 4847 scope.go:117] "RemoveContainer" containerID="4937de5efe9c69d1cff03af80019ba3d520ca77ef1084cc3b31a17b69cd9a479" Feb 18 00:30:10 crc kubenswrapper[4847]: E0218 00:30:10.837396 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4937de5efe9c69d1cff03af80019ba3d520ca77ef1084cc3b31a17b69cd9a479\": container with ID starting with 4937de5efe9c69d1cff03af80019ba3d520ca77ef1084cc3b31a17b69cd9a479 not found: ID does not exist" containerID="4937de5efe9c69d1cff03af80019ba3d520ca77ef1084cc3b31a17b69cd9a479" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.837432 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4937de5efe9c69d1cff03af80019ba3d520ca77ef1084cc3b31a17b69cd9a479"} err="failed to get container status \"4937de5efe9c69d1cff03af80019ba3d520ca77ef1084cc3b31a17b69cd9a479\": rpc error: code = NotFound desc = could not find container \"4937de5efe9c69d1cff03af80019ba3d520ca77ef1084cc3b31a17b69cd9a479\": container with ID starting with 4937de5efe9c69d1cff03af80019ba3d520ca77ef1084cc3b31a17b69cd9a479 not found: ID does not exist" Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.837920 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.921754 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 18 00:30:10 crc kubenswrapper[4847]: I0218 00:30:10.925228 4847 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 18 00:30:11 crc kubenswrapper[4847]: I0218 00:30:11.200676 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 18 00:30:11 crc kubenswrapper[4847]: I0218 00:30:11.415019 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 18 00:30:11 crc kubenswrapper[4847]: I0218 00:30:11.727693 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 18 00:30:11 crc kubenswrapper[4847]: I0218 00:30:11.791644 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 18 00:30:12 crc kubenswrapper[4847]: I0218 00:30:12.199669 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 18 00:30:12 crc kubenswrapper[4847]: I0218 00:30:12.218918 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 18 00:30:27 crc kubenswrapper[4847]: I0218 00:30:27.172017 4847 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Feb 18 00:30:27 crc kubenswrapper[4847]: I0218 00:30:27.926998 4847 generic.go:334] "Generic (PLEG): container finished" podID="daaf1919-f9da-4151-8932-4c77a478b531" containerID="34afd9253b44d482a3989efcbcdab02562d255f656cc1aeeb56b685568c1089a" exitCode=0
Feb 18 00:30:27 crc kubenswrapper[4847]: I0218 00:30:27.927097 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" event={"ID":"daaf1919-f9da-4151-8932-4c77a478b531","Type":"ContainerDied","Data":"34afd9253b44d482a3989efcbcdab02562d255f656cc1aeeb56b685568c1089a"}
Feb 18 00:30:27 crc kubenswrapper[4847]: I0218 00:30:27.928791 4847 scope.go:117] "RemoveContainer" containerID="34afd9253b44d482a3989efcbcdab02562d255f656cc1aeeb56b685568c1089a"
Feb 18 00:30:28 crc kubenswrapper[4847]: I0218 00:30:28.937543 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" event={"ID":"daaf1919-f9da-4151-8932-4c77a478b531","Type":"ContainerStarted","Data":"e96d27a812f4d7adee4a31259ac60cad862f0e1a7aac742e8d46a645288837a4"}
Feb 18 00:30:28 crc kubenswrapper[4847]: I0218 00:30:28.938142 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5"
Feb 18 00:30:28 crc kubenswrapper[4847]: I0218 00:30:28.941660 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5"
Feb 18 00:30:29 crc kubenswrapper[4847]: I0218 00:30:29.946211 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Feb 18 00:30:29 crc kubenswrapper[4847]: I0218 00:30:29.949552 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 18 00:30:29 crc kubenswrapper[4847]: I0218 00:30:29.949685 4847 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="21ff0a004dfd85a8d7b1d1b806b95894f90fe652d3a2ab936836e9b81a7c4fd8" exitCode=137
Feb 18 00:30:29 crc kubenswrapper[4847]: I0218 00:30:29.950634 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"21ff0a004dfd85a8d7b1d1b806b95894f90fe652d3a2ab936836e9b81a7c4fd8"}
Feb 18 00:30:29 crc kubenswrapper[4847]: I0218 00:30:29.950701 4847 scope.go:117] "RemoveContainer" containerID="e31ee1357057ab19b9424899006199d367768a61acb0e0101dfa35e713ccccaa"
Feb 18 00:30:30 crc kubenswrapper[4847]: I0218 00:30:30.958294 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Feb 18 00:30:30 crc kubenswrapper[4847]: I0218 00:30:30.960782 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"202c9c5fbce06ad507123b892355b3bc84b57ef8ba023d00b570aa64be50c4db"}
Feb 18 00:30:33 crc kubenswrapper[4847]: I0218 00:30:33.935318 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:30:39 crc kubenswrapper[4847]: I0218 00:30:39.785715 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:30:39 crc kubenswrapper[4847]: I0218 00:30:39.791180 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:30:40 crc kubenswrapper[4847]: I0218 00:30:40.034255 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:30:53 crc kubenswrapper[4847]: I0218 00:30:53.492141 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 00:30:53 crc kubenswrapper[4847]: I0218 00:30:53.492950 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 00:30:54 crc kubenswrapper[4847]: I0218 00:30:54.719570 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522910-6ss5t"]
Feb 18 00:30:54 crc kubenswrapper[4847]: E0218 00:30:54.719813 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb04bef-d533-4b90-9c2e-71d6a27ce5a5" containerName="installer"
Feb 18 00:30:54 crc kubenswrapper[4847]: I0218 00:30:54.719827 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb04bef-d533-4b90-9c2e-71d6a27ce5a5" containerName="installer"
Feb 18 00:30:54 crc kubenswrapper[4847]: E0218 00:30:54.719846 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 18 00:30:54 crc kubenswrapper[4847]: I0218 00:30:54.719852 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 18 00:30:54 crc kubenswrapper[4847]: I0218 00:30:54.719953 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 18 00:30:54 crc kubenswrapper[4847]: I0218 00:30:54.719967 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fb04bef-d533-4b90-9c2e-71d6a27ce5a5" containerName="installer"
Feb 18 00:30:54 crc kubenswrapper[4847]: I0218 00:30:54.720361 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-6ss5t"
Feb 18 00:30:54 crc kubenswrapper[4847]: I0218 00:30:54.722665 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 18 00:30:54 crc kubenswrapper[4847]: I0218 00:30:54.724089 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 18 00:30:54 crc kubenswrapper[4847]: I0218 00:30:54.740169 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522910-6ss5t"]
Feb 18 00:30:54 crc kubenswrapper[4847]: I0218 00:30:54.759302 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tmbbz"]
Feb 18 00:30:54 crc kubenswrapper[4847]: I0218 00:30:54.759569 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" podUID="464f104b-7665-4b2c-a507-81b166174685" containerName="controller-manager" containerID="cri-o://ffeb563488f3ac19adb870a29390b14bae2910d94dc77b2fb06e32cd9154cf2c" gracePeriod=30
Feb 18 00:30:54 crc kubenswrapper[4847]: I0218 00:30:54.779721 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2"]
Feb 18 00:30:54 crc kubenswrapper[4847]: I0218 00:30:54.780046 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2" podUID="78d08277-0a0a-4e0a-ab40-803bfdd76e29" containerName="route-controller-manager" containerID="cri-o://e692c229fa1cffc22b7c4b55c72f3527c30793a8ae89f29d604239e69a72ab2b" gracePeriod=30
Feb 18 00:30:54 crc kubenswrapper[4847]: I0218 00:30:54.832230 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49dd2490-6e51-4d9b-afea-1f1c33f7fa21-secret-volume\") pod \"collect-profiles-29522910-6ss5t\" (UID: \"49dd2490-6e51-4d9b-afea-1f1c33f7fa21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-6ss5t"
Feb 18 00:30:54 crc kubenswrapper[4847]: I0218 00:30:54.832316 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49dd2490-6e51-4d9b-afea-1f1c33f7fa21-config-volume\") pod \"collect-profiles-29522910-6ss5t\" (UID: \"49dd2490-6e51-4d9b-afea-1f1c33f7fa21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-6ss5t"
Feb 18 00:30:54 crc kubenswrapper[4847]: I0218 00:30:54.832396 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fhws\" (UniqueName: \"kubernetes.io/projected/49dd2490-6e51-4d9b-afea-1f1c33f7fa21-kube-api-access-9fhws\") pod \"collect-profiles-29522910-6ss5t\" (UID: \"49dd2490-6e51-4d9b-afea-1f1c33f7fa21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-6ss5t"
Feb 18 00:30:54 crc kubenswrapper[4847]: I0218 00:30:54.933696 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fhws\" (UniqueName: \"kubernetes.io/projected/49dd2490-6e51-4d9b-afea-1f1c33f7fa21-kube-api-access-9fhws\") pod \"collect-profiles-29522910-6ss5t\" (UID: \"49dd2490-6e51-4d9b-afea-1f1c33f7fa21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-6ss5t"
Feb 18 00:30:54 crc kubenswrapper[4847]: I0218 00:30:54.933773 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49dd2490-6e51-4d9b-afea-1f1c33f7fa21-secret-volume\") pod \"collect-profiles-29522910-6ss5t\" (UID: \"49dd2490-6e51-4d9b-afea-1f1c33f7fa21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-6ss5t"
Feb 18 00:30:54 crc kubenswrapper[4847]: I0218 00:30:54.933808 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49dd2490-6e51-4d9b-afea-1f1c33f7fa21-config-volume\") pod \"collect-profiles-29522910-6ss5t\" (UID: \"49dd2490-6e51-4d9b-afea-1f1c33f7fa21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-6ss5t"
Feb 18 00:30:54 crc kubenswrapper[4847]: I0218 00:30:54.934800 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49dd2490-6e51-4d9b-afea-1f1c33f7fa21-config-volume\") pod \"collect-profiles-29522910-6ss5t\" (UID: \"49dd2490-6e51-4d9b-afea-1f1c33f7fa21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-6ss5t"
Feb 18 00:30:54 crc kubenswrapper[4847]: I0218 00:30:54.942167 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49dd2490-6e51-4d9b-afea-1f1c33f7fa21-secret-volume\") pod \"collect-profiles-29522910-6ss5t\" (UID: \"49dd2490-6e51-4d9b-afea-1f1c33f7fa21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-6ss5t"
Feb 18 00:30:54 crc kubenswrapper[4847]: I0218 00:30:54.954593 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fhws\" (UniqueName: \"kubernetes.io/projected/49dd2490-6e51-4d9b-afea-1f1c33f7fa21-kube-api-access-9fhws\") pod \"collect-profiles-29522910-6ss5t\" (UID: \"49dd2490-6e51-4d9b-afea-1f1c33f7fa21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-6ss5t"
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.038000 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-6ss5t"
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.151018 4847 generic.go:334] "Generic (PLEG): container finished" podID="78d08277-0a0a-4e0a-ab40-803bfdd76e29" containerID="e692c229fa1cffc22b7c4b55c72f3527c30793a8ae89f29d604239e69a72ab2b" exitCode=0
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.151118 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2" event={"ID":"78d08277-0a0a-4e0a-ab40-803bfdd76e29","Type":"ContainerDied","Data":"e692c229fa1cffc22b7c4b55c72f3527c30793a8ae89f29d604239e69a72ab2b"}
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.153974 4847 generic.go:334] "Generic (PLEG): container finished" podID="464f104b-7665-4b2c-a507-81b166174685" containerID="ffeb563488f3ac19adb870a29390b14bae2910d94dc77b2fb06e32cd9154cf2c" exitCode=0
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.154056 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" event={"ID":"464f104b-7665-4b2c-a507-81b166174685","Type":"ContainerDied","Data":"ffeb563488f3ac19adb870a29390b14bae2910d94dc77b2fb06e32cd9154cf2c"}
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.371786 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz"
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.416303 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2"
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.426056 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522910-6ss5t"]
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.574255 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwcz2\" (UniqueName: \"kubernetes.io/projected/464f104b-7665-4b2c-a507-81b166174685-kube-api-access-hwcz2\") pod \"464f104b-7665-4b2c-a507-81b166174685\" (UID: \"464f104b-7665-4b2c-a507-81b166174685\") "
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.574874 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbzwr\" (UniqueName: \"kubernetes.io/projected/78d08277-0a0a-4e0a-ab40-803bfdd76e29-kube-api-access-wbzwr\") pod \"78d08277-0a0a-4e0a-ab40-803bfdd76e29\" (UID: \"78d08277-0a0a-4e0a-ab40-803bfdd76e29\") "
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.574925 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/464f104b-7665-4b2c-a507-81b166174685-proxy-ca-bundles\") pod \"464f104b-7665-4b2c-a507-81b166174685\" (UID: \"464f104b-7665-4b2c-a507-81b166174685\") "
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.574971 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78d08277-0a0a-4e0a-ab40-803bfdd76e29-config\") pod \"78d08277-0a0a-4e0a-ab40-803bfdd76e29\" (UID: \"78d08277-0a0a-4e0a-ab40-803bfdd76e29\") "
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.575030 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78d08277-0a0a-4e0a-ab40-803bfdd76e29-client-ca\") pod \"78d08277-0a0a-4e0a-ab40-803bfdd76e29\" (UID: \"78d08277-0a0a-4e0a-ab40-803bfdd76e29\") "
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.575063 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78d08277-0a0a-4e0a-ab40-803bfdd76e29-serving-cert\") pod \"78d08277-0a0a-4e0a-ab40-803bfdd76e29\" (UID: \"78d08277-0a0a-4e0a-ab40-803bfdd76e29\") "
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.575114 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/464f104b-7665-4b2c-a507-81b166174685-client-ca\") pod \"464f104b-7665-4b2c-a507-81b166174685\" (UID: \"464f104b-7665-4b2c-a507-81b166174685\") "
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.575152 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/464f104b-7665-4b2c-a507-81b166174685-config\") pod \"464f104b-7665-4b2c-a507-81b166174685\" (UID: \"464f104b-7665-4b2c-a507-81b166174685\") "
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.575176 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/464f104b-7665-4b2c-a507-81b166174685-serving-cert\") pod \"464f104b-7665-4b2c-a507-81b166174685\" (UID: \"464f104b-7665-4b2c-a507-81b166174685\") "
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.576199 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/464f104b-7665-4b2c-a507-81b166174685-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "464f104b-7665-4b2c-a507-81b166174685" (UID: "464f104b-7665-4b2c-a507-81b166174685"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.576212 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78d08277-0a0a-4e0a-ab40-803bfdd76e29-client-ca" (OuterVolumeSpecName: "client-ca") pod "78d08277-0a0a-4e0a-ab40-803bfdd76e29" (UID: "78d08277-0a0a-4e0a-ab40-803bfdd76e29"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.576239 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/464f104b-7665-4b2c-a507-81b166174685-client-ca" (OuterVolumeSpecName: "client-ca") pod "464f104b-7665-4b2c-a507-81b166174685" (UID: "464f104b-7665-4b2c-a507-81b166174685"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.576402 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78d08277-0a0a-4e0a-ab40-803bfdd76e29-config" (OuterVolumeSpecName: "config") pod "78d08277-0a0a-4e0a-ab40-803bfdd76e29" (UID: "78d08277-0a0a-4e0a-ab40-803bfdd76e29"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.576579 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/464f104b-7665-4b2c-a507-81b166174685-config" (OuterVolumeSpecName: "config") pod "464f104b-7665-4b2c-a507-81b166174685" (UID: "464f104b-7665-4b2c-a507-81b166174685"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.576890 4847 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/464f104b-7665-4b2c-a507-81b166174685-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.576922 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78d08277-0a0a-4e0a-ab40-803bfdd76e29-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.576939 4847 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78d08277-0a0a-4e0a-ab40-803bfdd76e29-client-ca\") on node \"crc\" DevicePath \"\""
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.576956 4847 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/464f104b-7665-4b2c-a507-81b166174685-client-ca\") on node \"crc\" DevicePath \"\""
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.576970 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/464f104b-7665-4b2c-a507-81b166174685-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.583764 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/464f104b-7665-4b2c-a507-81b166174685-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "464f104b-7665-4b2c-a507-81b166174685" (UID: "464f104b-7665-4b2c-a507-81b166174685"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.584003 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78d08277-0a0a-4e0a-ab40-803bfdd76e29-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "78d08277-0a0a-4e0a-ab40-803bfdd76e29" (UID: "78d08277-0a0a-4e0a-ab40-803bfdd76e29"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.584402 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/464f104b-7665-4b2c-a507-81b166174685-kube-api-access-hwcz2" (OuterVolumeSpecName: "kube-api-access-hwcz2") pod "464f104b-7665-4b2c-a507-81b166174685" (UID: "464f104b-7665-4b2c-a507-81b166174685"). InnerVolumeSpecName "kube-api-access-hwcz2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.585365 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78d08277-0a0a-4e0a-ab40-803bfdd76e29-kube-api-access-wbzwr" (OuterVolumeSpecName: "kube-api-access-wbzwr") pod "78d08277-0a0a-4e0a-ab40-803bfdd76e29" (UID: "78d08277-0a0a-4e0a-ab40-803bfdd76e29"). InnerVolumeSpecName "kube-api-access-wbzwr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.677964 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwcz2\" (UniqueName: \"kubernetes.io/projected/464f104b-7665-4b2c-a507-81b166174685-kube-api-access-hwcz2\") on node \"crc\" DevicePath \"\""
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.678002 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbzwr\" (UniqueName: \"kubernetes.io/projected/78d08277-0a0a-4e0a-ab40-803bfdd76e29-kube-api-access-wbzwr\") on node \"crc\" DevicePath \"\""
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.678015 4847 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78d08277-0a0a-4e0a-ab40-803bfdd76e29-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:30:55 crc kubenswrapper[4847]: I0218 00:30:55.678028 4847 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/464f104b-7665-4b2c-a507-81b166174685-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.164519 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.164781 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-tmbbz" event={"ID":"464f104b-7665-4b2c-a507-81b166174685","Type":"ContainerDied","Data":"d7e4ede30a89a71c3cc222778589bf2605ca10adbfedfaeff09f9b9e5a4e9eaa"}
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.164875 4847 scope.go:117] "RemoveContainer" containerID="ffeb563488f3ac19adb870a29390b14bae2910d94dc77b2fb06e32cd9154cf2c"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.168283 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2" event={"ID":"78d08277-0a0a-4e0a-ab40-803bfdd76e29","Type":"ContainerDied","Data":"79e09f2b731bd96d65be88520fbdd8385a86a9d76f46d91c21e6bde053a8a87a"}
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.168458 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.170944 4847 generic.go:334] "Generic (PLEG): container finished" podID="49dd2490-6e51-4d9b-afea-1f1c33f7fa21" containerID="b609f841f279a2079f88822b82565fe4b882b95302e98fb98bfff804ece01769" exitCode=0
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.170976 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-6ss5t" event={"ID":"49dd2490-6e51-4d9b-afea-1f1c33f7fa21","Type":"ContainerDied","Data":"b609f841f279a2079f88822b82565fe4b882b95302e98fb98bfff804ece01769"}
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.170991 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-6ss5t" event={"ID":"49dd2490-6e51-4d9b-afea-1f1c33f7fa21","Type":"ContainerStarted","Data":"062f32b7b8bf9c26b639edc859853397a63fdc541129d3bcab91c74412ea89dd"}
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.197956 4847 scope.go:117] "RemoveContainer" containerID="e692c229fa1cffc22b7c4b55c72f3527c30793a8ae89f29d604239e69a72ab2b"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.223661 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2"]
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.229500 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-c7dv2"]
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.234560 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tmbbz"]
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.238017 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-tmbbz"]
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.579470 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6889d7b855-r6nw4"]
Feb 18 00:30:56 crc kubenswrapper[4847]: E0218 00:30:56.579862 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="464f104b-7665-4b2c-a507-81b166174685" containerName="controller-manager"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.579887 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="464f104b-7665-4b2c-a507-81b166174685" containerName="controller-manager"
Feb 18 00:30:56 crc kubenswrapper[4847]: E0218 00:30:56.579907 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d08277-0a0a-4e0a-ab40-803bfdd76e29" containerName="route-controller-manager"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.579917 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d08277-0a0a-4e0a-ab40-803bfdd76e29" containerName="route-controller-manager"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.580086 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="78d08277-0a0a-4e0a-ab40-803bfdd76e29" containerName="route-controller-manager"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.580112 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="464f104b-7665-4b2c-a507-81b166174685" containerName="controller-manager"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.580726 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.583981 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.584423 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.585071 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.585826 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7956996f9-bwndv"]
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.586552 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.586653 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.586675 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.587936 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7956996f9-bwndv"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.595244 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.595415 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-client-ca\") pod \"controller-manager-6889d7b855-r6nw4\" (UID: \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\") " pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.595466 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.595472 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-config\") pod \"controller-manager-6889d7b855-r6nw4\" (UID: \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\") " pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.595537 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/724b7510-1624-4aac-bd35-c93f19743e55-client-ca\") pod \"route-controller-manager-7956996f9-bwndv\" (UID: \"724b7510-1624-4aac-bd35-c93f19743e55\") " pod="openshift-route-controller-manager/route-controller-manager-7956996f9-bwndv"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.595565 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/724b7510-1624-4aac-bd35-c93f19743e55-serving-cert\") pod \"route-controller-manager-7956996f9-bwndv\" (UID: \"724b7510-1624-4aac-bd35-c93f19743e55\") " pod="openshift-route-controller-manager/route-controller-manager-7956996f9-bwndv"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.595585 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-proxy-ca-bundles\") pod \"controller-manager-6889d7b855-r6nw4\" (UID: \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\") " pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.595619 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-serving-cert\") pod \"controller-manager-6889d7b855-r6nw4\" (UID: \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\") " pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.595678 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmbsv\" (UniqueName: \"kubernetes.io/projected/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-kube-api-access-bmbsv\") pod \"controller-manager-6889d7b855-r6nw4\" (UID: \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\") " pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.595713 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86g2h\" (UniqueName: \"kubernetes.io/projected/724b7510-1624-4aac-bd35-c93f19743e55-kube-api-access-86g2h\") pod \"route-controller-manager-7956996f9-bwndv\" (UID: \"724b7510-1624-4aac-bd35-c93f19743e55\") " pod="openshift-route-controller-manager/route-controller-manager-7956996f9-bwndv"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.595733 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/724b7510-1624-4aac-bd35-c93f19743e55-config\") pod \"route-controller-manager-7956996f9-bwndv\" (UID: \"724b7510-1624-4aac-bd35-c93f19743e55\") " pod="openshift-route-controller-manager/route-controller-manager-7956996f9-bwndv"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.595846 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.595937 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.596116 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.596304 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.598884 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.616732 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7956996f9-bwndv"]
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.621155 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6889d7b855-r6nw4"]
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.697433 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-client-ca\") pod \"controller-manager-6889d7b855-r6nw4\" (UID: \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\") " pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.697504 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-config\") pod \"controller-manager-6889d7b855-r6nw4\" (UID: \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\") " pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.697554 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/724b7510-1624-4aac-bd35-c93f19743e55-client-ca\") pod \"route-controller-manager-7956996f9-bwndv\" (UID: \"724b7510-1624-4aac-bd35-c93f19743e55\") " pod="openshift-route-controller-manager/route-controller-manager-7956996f9-bwndv"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.697586 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-proxy-ca-bundles\") pod \"controller-manager-6889d7b855-r6nw4\" (UID: \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\") " pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4"
Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.697644 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/724b7510-1624-4aac-bd35-c93f19743e55-serving-cert\") pod \"route-controller-manager-7956996f9-bwndv\" (UID: \"724b7510-1624-4aac-bd35-c93f19743e55\") " pod="openshift-route-controller-manager/route-controller-manager-7956996f9-bwndv"
Feb 18 00:30:56 crc
kubenswrapper[4847]: I0218 00:30:56.697678 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-serving-cert\") pod \"controller-manager-6889d7b855-r6nw4\" (UID: \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\") " pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4" Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.697721 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmbsv\" (UniqueName: \"kubernetes.io/projected/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-kube-api-access-bmbsv\") pod \"controller-manager-6889d7b855-r6nw4\" (UID: \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\") " pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4" Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.697768 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86g2h\" (UniqueName: \"kubernetes.io/projected/724b7510-1624-4aac-bd35-c93f19743e55-kube-api-access-86g2h\") pod \"route-controller-manager-7956996f9-bwndv\" (UID: \"724b7510-1624-4aac-bd35-c93f19743e55\") " pod="openshift-route-controller-manager/route-controller-manager-7956996f9-bwndv" Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.697790 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/724b7510-1624-4aac-bd35-c93f19743e55-config\") pod \"route-controller-manager-7956996f9-bwndv\" (UID: \"724b7510-1624-4aac-bd35-c93f19743e55\") " pod="openshift-route-controller-manager/route-controller-manager-7956996f9-bwndv" Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.699690 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/724b7510-1624-4aac-bd35-c93f19743e55-config\") pod \"route-controller-manager-7956996f9-bwndv\" (UID: 
\"724b7510-1624-4aac-bd35-c93f19743e55\") " pod="openshift-route-controller-manager/route-controller-manager-7956996f9-bwndv" Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.699815 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/724b7510-1624-4aac-bd35-c93f19743e55-client-ca\") pod \"route-controller-manager-7956996f9-bwndv\" (UID: \"724b7510-1624-4aac-bd35-c93f19743e55\") " pod="openshift-route-controller-manager/route-controller-manager-7956996f9-bwndv" Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.699950 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-client-ca\") pod \"controller-manager-6889d7b855-r6nw4\" (UID: \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\") " pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4" Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.700341 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-proxy-ca-bundles\") pod \"controller-manager-6889d7b855-r6nw4\" (UID: \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\") " pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4" Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.700416 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-config\") pod \"controller-manager-6889d7b855-r6nw4\" (UID: \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\") " pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4" Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.705316 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-serving-cert\") pod \"controller-manager-6889d7b855-r6nw4\" (UID: \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\") " pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4" Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.705377 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/724b7510-1624-4aac-bd35-c93f19743e55-serving-cert\") pod \"route-controller-manager-7956996f9-bwndv\" (UID: \"724b7510-1624-4aac-bd35-c93f19743e55\") " pod="openshift-route-controller-manager/route-controller-manager-7956996f9-bwndv" Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.722015 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86g2h\" (UniqueName: \"kubernetes.io/projected/724b7510-1624-4aac-bd35-c93f19743e55-kube-api-access-86g2h\") pod \"route-controller-manager-7956996f9-bwndv\" (UID: \"724b7510-1624-4aac-bd35-c93f19743e55\") " pod="openshift-route-controller-manager/route-controller-manager-7956996f9-bwndv" Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.724950 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmbsv\" (UniqueName: \"kubernetes.io/projected/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-kube-api-access-bmbsv\") pod \"controller-manager-6889d7b855-r6nw4\" (UID: \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\") " pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4" Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.907714 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4" Feb 18 00:30:56 crc kubenswrapper[4847]: I0218 00:30:56.946275 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7956996f9-bwndv" Feb 18 00:30:57 crc kubenswrapper[4847]: I0218 00:30:57.163327 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6889d7b855-r6nw4"] Feb 18 00:30:57 crc kubenswrapper[4847]: W0218 00:30:57.179533 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod380cc2f5_0ff8_4c80_8ebe_5afc80e47acf.slice/crio-22d4b725545fb1a79abc851cbaf74a08619bef550f015786d70d98020f1fcd91 WatchSource:0}: Error finding container 22d4b725545fb1a79abc851cbaf74a08619bef550f015786d70d98020f1fcd91: Status 404 returned error can't find the container with id 22d4b725545fb1a79abc851cbaf74a08619bef550f015786d70d98020f1fcd91 Feb 18 00:30:57 crc kubenswrapper[4847]: I0218 00:30:57.216343 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7956996f9-bwndv"] Feb 18 00:30:57 crc kubenswrapper[4847]: W0218 00:30:57.229449 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod724b7510_1624_4aac_bd35_c93f19743e55.slice/crio-cb48e6d5069c4bb93bc0107ce5161b5fb2fcdcb315bb8b79c755fcb2db9da513 WatchSource:0}: Error finding container cb48e6d5069c4bb93bc0107ce5161b5fb2fcdcb315bb8b79c755fcb2db9da513: Status 404 returned error can't find the container with id cb48e6d5069c4bb93bc0107ce5161b5fb2fcdcb315bb8b79c755fcb2db9da513 Feb 18 00:30:57 crc kubenswrapper[4847]: I0218 00:30:57.422886 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="464f104b-7665-4b2c-a507-81b166174685" path="/var/lib/kubelet/pods/464f104b-7665-4b2c-a507-81b166174685/volumes" Feb 18 00:30:57 crc kubenswrapper[4847]: I0218 00:30:57.426977 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="78d08277-0a0a-4e0a-ab40-803bfdd76e29" path="/var/lib/kubelet/pods/78d08277-0a0a-4e0a-ab40-803bfdd76e29/volumes" Feb 18 00:30:57 crc kubenswrapper[4847]: I0218 00:30:57.430040 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-6ss5t" Feb 18 00:30:57 crc kubenswrapper[4847]: I0218 00:30:57.509420 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49dd2490-6e51-4d9b-afea-1f1c33f7fa21-secret-volume\") pod \"49dd2490-6e51-4d9b-afea-1f1c33f7fa21\" (UID: \"49dd2490-6e51-4d9b-afea-1f1c33f7fa21\") " Feb 18 00:30:57 crc kubenswrapper[4847]: I0218 00:30:57.509480 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fhws\" (UniqueName: \"kubernetes.io/projected/49dd2490-6e51-4d9b-afea-1f1c33f7fa21-kube-api-access-9fhws\") pod \"49dd2490-6e51-4d9b-afea-1f1c33f7fa21\" (UID: \"49dd2490-6e51-4d9b-afea-1f1c33f7fa21\") " Feb 18 00:30:57 crc kubenswrapper[4847]: I0218 00:30:57.509518 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49dd2490-6e51-4d9b-afea-1f1c33f7fa21-config-volume\") pod \"49dd2490-6e51-4d9b-afea-1f1c33f7fa21\" (UID: \"49dd2490-6e51-4d9b-afea-1f1c33f7fa21\") " Feb 18 00:30:57 crc kubenswrapper[4847]: I0218 00:30:57.511195 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49dd2490-6e51-4d9b-afea-1f1c33f7fa21-config-volume" (OuterVolumeSpecName: "config-volume") pod "49dd2490-6e51-4d9b-afea-1f1c33f7fa21" (UID: "49dd2490-6e51-4d9b-afea-1f1c33f7fa21"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:30:57 crc kubenswrapper[4847]: I0218 00:30:57.517382 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49dd2490-6e51-4d9b-afea-1f1c33f7fa21-kube-api-access-9fhws" (OuterVolumeSpecName: "kube-api-access-9fhws") pod "49dd2490-6e51-4d9b-afea-1f1c33f7fa21" (UID: "49dd2490-6e51-4d9b-afea-1f1c33f7fa21"). InnerVolumeSpecName "kube-api-access-9fhws". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:30:57 crc kubenswrapper[4847]: I0218 00:30:57.517699 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49dd2490-6e51-4d9b-afea-1f1c33f7fa21-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "49dd2490-6e51-4d9b-afea-1f1c33f7fa21" (UID: "49dd2490-6e51-4d9b-afea-1f1c33f7fa21"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:30:57 crc kubenswrapper[4847]: I0218 00:30:57.616381 4847 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49dd2490-6e51-4d9b-afea-1f1c33f7fa21-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 00:30:57 crc kubenswrapper[4847]: I0218 00:30:57.616927 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fhws\" (UniqueName: \"kubernetes.io/projected/49dd2490-6e51-4d9b-afea-1f1c33f7fa21-kube-api-access-9fhws\") on node \"crc\" DevicePath \"\"" Feb 18 00:30:57 crc kubenswrapper[4847]: I0218 00:30:57.616990 4847 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49dd2490-6e51-4d9b-afea-1f1c33f7fa21-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 00:30:58 crc kubenswrapper[4847]: I0218 00:30:58.198763 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7956996f9-bwndv" 
event={"ID":"724b7510-1624-4aac-bd35-c93f19743e55","Type":"ContainerStarted","Data":"d5ea9b7911acaa7991a034fcf5ce446bc5df8426e303e14804e3d8ad69f818ba"} Feb 18 00:30:58 crc kubenswrapper[4847]: I0218 00:30:58.199424 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7956996f9-bwndv" Feb 18 00:30:58 crc kubenswrapper[4847]: I0218 00:30:58.199444 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7956996f9-bwndv" event={"ID":"724b7510-1624-4aac-bd35-c93f19743e55","Type":"ContainerStarted","Data":"cb48e6d5069c4bb93bc0107ce5161b5fb2fcdcb315bb8b79c755fcb2db9da513"} Feb 18 00:30:58 crc kubenswrapper[4847]: I0218 00:30:58.200810 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-6ss5t" Feb 18 00:30:58 crc kubenswrapper[4847]: I0218 00:30:58.201616 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-6ss5t" event={"ID":"49dd2490-6e51-4d9b-afea-1f1c33f7fa21","Type":"ContainerDied","Data":"062f32b7b8bf9c26b639edc859853397a63fdc541129d3bcab91c74412ea89dd"} Feb 18 00:30:58 crc kubenswrapper[4847]: I0218 00:30:58.201696 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="062f32b7b8bf9c26b639edc859853397a63fdc541129d3bcab91c74412ea89dd" Feb 18 00:30:58 crc kubenswrapper[4847]: I0218 00:30:58.202848 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4" event={"ID":"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf","Type":"ContainerStarted","Data":"e767e780fd3559b53ed49be28b22fea8faba825f973a1924e542f1dcd86aab53"} Feb 18 00:30:58 crc kubenswrapper[4847]: I0218 00:30:58.202900 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4" event={"ID":"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf","Type":"ContainerStarted","Data":"22d4b725545fb1a79abc851cbaf74a08619bef550f015786d70d98020f1fcd91"} Feb 18 00:30:58 crc kubenswrapper[4847]: I0218 00:30:58.203370 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4" Feb 18 00:30:58 crc kubenswrapper[4847]: I0218 00:30:58.206630 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7956996f9-bwndv" Feb 18 00:30:58 crc kubenswrapper[4847]: I0218 00:30:58.208155 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4" Feb 18 00:30:58 crc kubenswrapper[4847]: I0218 00:30:58.228039 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7956996f9-bwndv" podStartSLOduration=4.228009143 podStartE2EDuration="4.228009143s" podCreationTimestamp="2026-02-18 00:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:30:58.222229397 +0000 UTC m=+331.599580339" watchObservedRunningTime="2026-02-18 00:30:58.228009143 +0000 UTC m=+331.605360115" Feb 18 00:30:58 crc kubenswrapper[4847]: I0218 00:30:58.243317 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4" podStartSLOduration=4.243280949 podStartE2EDuration="4.243280949s" podCreationTimestamp="2026-02-18 00:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:30:58.238900556 +0000 UTC m=+331.616251518" 
watchObservedRunningTime="2026-02-18 00:30:58.243280949 +0000 UTC m=+331.620631921" Feb 18 00:31:23 crc kubenswrapper[4847]: I0218 00:31:23.491884 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:31:23 crc kubenswrapper[4847]: I0218 00:31:23.494835 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:31:42 crc kubenswrapper[4847]: I0218 00:31:42.908657 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tqxr4"] Feb 18 00:31:42 crc kubenswrapper[4847]: I0218 00:31:42.911187 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tqxr4" podUID="4c5d23e9-80d6-4df1-9484-3d5d452231f6" containerName="registry-server" containerID="cri-o://72e03cbd8e7dfb83be77793ecd1727d0259fe8690ac922c4d08a1b712eeb3d3a" gracePeriod=30 Feb 18 00:31:42 crc kubenswrapper[4847]: I0218 00:31:42.928970 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-px9xt"] Feb 18 00:31:42 crc kubenswrapper[4847]: I0218 00:31:42.929759 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-px9xt" podUID="bb0e353b-9f34-432f-92f1-9102f53aeff3" containerName="registry-server" containerID="cri-o://16ecd6be9264b640e03a04e44a816a1e8998a6a9f3e40b64cab55fc4b7ecaa76" gracePeriod=30 Feb 18 00:31:42 crc kubenswrapper[4847]: I0218 00:31:42.945283 4847 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hwsk5"] Feb 18 00:31:42 crc kubenswrapper[4847]: I0218 00:31:42.945627 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" podUID="daaf1919-f9da-4151-8932-4c77a478b531" containerName="marketplace-operator" containerID="cri-o://e96d27a812f4d7adee4a31259ac60cad862f0e1a7aac742e8d46a645288837a4" gracePeriod=30 Feb 18 00:31:42 crc kubenswrapper[4847]: I0218 00:31:42.957709 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bv8f2"] Feb 18 00:31:42 crc kubenswrapper[4847]: I0218 00:31:42.958206 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bv8f2" podUID="767c924a-1203-477f-8501-a65f63965047" containerName="registry-server" containerID="cri-o://14e756059bbf6d2dcfde255e43a1bae7c1d3a3fd429e8481a40dfac04eb30656" gracePeriod=30 Feb 18 00:31:42 crc kubenswrapper[4847]: I0218 00:31:42.975454 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4dxcv"] Feb 18 00:31:42 crc kubenswrapper[4847]: E0218 00:31:42.976001 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49dd2490-6e51-4d9b-afea-1f1c33f7fa21" containerName="collect-profiles" Feb 18 00:31:42 crc kubenswrapper[4847]: I0218 00:31:42.976025 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="49dd2490-6e51-4d9b-afea-1f1c33f7fa21" containerName="collect-profiles" Feb 18 00:31:42 crc kubenswrapper[4847]: I0218 00:31:42.976335 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="49dd2490-6e51-4d9b-afea-1f1c33f7fa21" containerName="collect-profiles" Feb 18 00:31:42 crc kubenswrapper[4847]: I0218 00:31:42.977119 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4dxcv" Feb 18 00:31:42 crc kubenswrapper[4847]: I0218 00:31:42.982383 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lr9xc"] Feb 18 00:31:42 crc kubenswrapper[4847]: I0218 00:31:42.982741 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lr9xc" podUID="67f6671f-0af7-44a3-9204-8fa77554d1d1" containerName="registry-server" containerID="cri-o://43f086b3b289848710c80c7c5bf69ee1dc5feed3f63a10ccf01fa8dae64e6365" gracePeriod=30 Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.001459 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4dxcv"] Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.122641 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a3803e77-d427-4d42-9e2e-c8fa87bca4d8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4dxcv\" (UID: \"a3803e77-d427-4d42-9e2e-c8fa87bca4d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-4dxcv" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.122725 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwgsm\" (UniqueName: \"kubernetes.io/projected/a3803e77-d427-4d42-9e2e-c8fa87bca4d8-kube-api-access-xwgsm\") pod \"marketplace-operator-79b997595-4dxcv\" (UID: \"a3803e77-d427-4d42-9e2e-c8fa87bca4d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-4dxcv" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.122767 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a3803e77-d427-4d42-9e2e-c8fa87bca4d8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4dxcv\" (UID: \"a3803e77-d427-4d42-9e2e-c8fa87bca4d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-4dxcv" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.225538 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3803e77-d427-4d42-9e2e-c8fa87bca4d8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4dxcv\" (UID: \"a3803e77-d427-4d42-9e2e-c8fa87bca4d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-4dxcv" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.225625 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a3803e77-d427-4d42-9e2e-c8fa87bca4d8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4dxcv\" (UID: \"a3803e77-d427-4d42-9e2e-c8fa87bca4d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-4dxcv" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.225683 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwgsm\" (UniqueName: \"kubernetes.io/projected/a3803e77-d427-4d42-9e2e-c8fa87bca4d8-kube-api-access-xwgsm\") pod \"marketplace-operator-79b997595-4dxcv\" (UID: \"a3803e77-d427-4d42-9e2e-c8fa87bca4d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-4dxcv" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.227329 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3803e77-d427-4d42-9e2e-c8fa87bca4d8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4dxcv\" (UID: \"a3803e77-d427-4d42-9e2e-c8fa87bca4d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-4dxcv" Feb 18 00:31:43 crc 
kubenswrapper[4847]: I0218 00:31:43.245009 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a3803e77-d427-4d42-9e2e-c8fa87bca4d8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4dxcv\" (UID: \"a3803e77-d427-4d42-9e2e-c8fa87bca4d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-4dxcv" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.245119 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwgsm\" (UniqueName: \"kubernetes.io/projected/a3803e77-d427-4d42-9e2e-c8fa87bca4d8-kube-api-access-xwgsm\") pod \"marketplace-operator-79b997595-4dxcv\" (UID: \"a3803e77-d427-4d42-9e2e-c8fa87bca4d8\") " pod="openshift-marketplace/marketplace-operator-79b997595-4dxcv" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.434661 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4dxcv" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.458227 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-px9xt" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.577348 4847 generic.go:334] "Generic (PLEG): container finished" podID="4c5d23e9-80d6-4df1-9484-3d5d452231f6" containerID="72e03cbd8e7dfb83be77793ecd1727d0259fe8690ac922c4d08a1b712eeb3d3a" exitCode=0 Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.577394 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tqxr4" event={"ID":"4c5d23e9-80d6-4df1-9484-3d5d452231f6","Type":"ContainerDied","Data":"72e03cbd8e7dfb83be77793ecd1727d0259fe8690ac922c4d08a1b712eeb3d3a"} Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.581165 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tqxr4" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.588627 4847 generic.go:334] "Generic (PLEG): container finished" podID="bb0e353b-9f34-432f-92f1-9102f53aeff3" containerID="16ecd6be9264b640e03a04e44a816a1e8998a6a9f3e40b64cab55fc4b7ecaa76" exitCode=0 Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.588721 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px9xt" event={"ID":"bb0e353b-9f34-432f-92f1-9102f53aeff3","Type":"ContainerDied","Data":"16ecd6be9264b640e03a04e44a816a1e8998a6a9f3e40b64cab55fc4b7ecaa76"} Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.588751 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-px9xt" event={"ID":"bb0e353b-9f34-432f-92f1-9102f53aeff3","Type":"ContainerDied","Data":"c37d6faa41a67d8cf833464a7644ff61cb3fe8b1b74781af65ecbc1b50170c1b"} Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.588772 4847 scope.go:117] "RemoveContainer" containerID="16ecd6be9264b640e03a04e44a816a1e8998a6a9f3e40b64cab55fc4b7ecaa76" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.588929 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-px9xt" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.591487 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bv8f2" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.594998 4847 generic.go:334] "Generic (PLEG): container finished" podID="67f6671f-0af7-44a3-9204-8fa77554d1d1" containerID="43f086b3b289848710c80c7c5bf69ee1dc5feed3f63a10ccf01fa8dae64e6365" exitCode=0 Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.595085 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lr9xc" event={"ID":"67f6671f-0af7-44a3-9204-8fa77554d1d1","Type":"ContainerDied","Data":"43f086b3b289848710c80c7c5bf69ee1dc5feed3f63a10ccf01fa8dae64e6365"} Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.596193 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lr9xc" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.598276 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.600711 4847 generic.go:334] "Generic (PLEG): container finished" podID="767c924a-1203-477f-8501-a65f63965047" containerID="14e756059bbf6d2dcfde255e43a1bae7c1d3a3fd429e8481a40dfac04eb30656" exitCode=0 Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.600772 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bv8f2" event={"ID":"767c924a-1203-477f-8501-a65f63965047","Type":"ContainerDied","Data":"14e756059bbf6d2dcfde255e43a1bae7c1d3a3fd429e8481a40dfac04eb30656"} Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.600848 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bv8f2" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.618829 4847 generic.go:334] "Generic (PLEG): container finished" podID="daaf1919-f9da-4151-8932-4c77a478b531" containerID="e96d27a812f4d7adee4a31259ac60cad862f0e1a7aac742e8d46a645288837a4" exitCode=0 Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.618903 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" event={"ID":"daaf1919-f9da-4151-8932-4c77a478b531","Type":"ContainerDied","Data":"e96d27a812f4d7adee4a31259ac60cad862f0e1a7aac742e8d46a645288837a4"} Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.618928 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hwsk5" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.628694 4847 scope.go:117] "RemoveContainer" containerID="86c01a3e4aeb957109777a1d7f3e7fbae13bddc3dadc1db183ff3bee09da9b1a" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.635409 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb0e353b-9f34-432f-92f1-9102f53aeff3-catalog-content\") pod \"bb0e353b-9f34-432f-92f1-9102f53aeff3\" (UID: \"bb0e353b-9f34-432f-92f1-9102f53aeff3\") " Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.635442 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zt2q\" (UniqueName: \"kubernetes.io/projected/4c5d23e9-80d6-4df1-9484-3d5d452231f6-kube-api-access-4zt2q\") pod \"4c5d23e9-80d6-4df1-9484-3d5d452231f6\" (UID: \"4c5d23e9-80d6-4df1-9484-3d5d452231f6\") " Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.635470 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/767c924a-1203-477f-8501-a65f63965047-utilities\") pod \"767c924a-1203-477f-8501-a65f63965047\" (UID: \"767c924a-1203-477f-8501-a65f63965047\") " Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.635491 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2t5j\" (UniqueName: \"kubernetes.io/projected/767c924a-1203-477f-8501-a65f63965047-kube-api-access-s2t5j\") pod \"767c924a-1203-477f-8501-a65f63965047\" (UID: \"767c924a-1203-477f-8501-a65f63965047\") " Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.635508 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f6671f-0af7-44a3-9204-8fa77554d1d1-utilities\") pod \"67f6671f-0af7-44a3-9204-8fa77554d1d1\" (UID: \"67f6671f-0af7-44a3-9204-8fa77554d1d1\") " Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.635532 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/daaf1919-f9da-4151-8932-4c77a478b531-marketplace-trusted-ca\") pod \"daaf1919-f9da-4151-8932-4c77a478b531\" (UID: \"daaf1919-f9da-4151-8932-4c77a478b531\") " Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.635586 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjz6q\" (UniqueName: \"kubernetes.io/projected/bb0e353b-9f34-432f-92f1-9102f53aeff3-kube-api-access-kjz6q\") pod \"bb0e353b-9f34-432f-92f1-9102f53aeff3\" (UID: \"bb0e353b-9f34-432f-92f1-9102f53aeff3\") " Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.635617 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f6671f-0af7-44a3-9204-8fa77554d1d1-catalog-content\") pod \"67f6671f-0af7-44a3-9204-8fa77554d1d1\" (UID: \"67f6671f-0af7-44a3-9204-8fa77554d1d1\") " Feb 18 00:31:43 crc 
kubenswrapper[4847]: I0218 00:31:43.635655 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c5d23e9-80d6-4df1-9484-3d5d452231f6-utilities\") pod \"4c5d23e9-80d6-4df1-9484-3d5d452231f6\" (UID: \"4c5d23e9-80d6-4df1-9484-3d5d452231f6\") " Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.635675 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c5d23e9-80d6-4df1-9484-3d5d452231f6-catalog-content\") pod \"4c5d23e9-80d6-4df1-9484-3d5d452231f6\" (UID: \"4c5d23e9-80d6-4df1-9484-3d5d452231f6\") " Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.635700 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqs7m\" (UniqueName: \"kubernetes.io/projected/daaf1919-f9da-4151-8932-4c77a478b531-kube-api-access-qqs7m\") pod \"daaf1919-f9da-4151-8932-4c77a478b531\" (UID: \"daaf1919-f9da-4151-8932-4c77a478b531\") " Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.635755 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/daaf1919-f9da-4151-8932-4c77a478b531-marketplace-operator-metrics\") pod \"daaf1919-f9da-4151-8932-4c77a478b531\" (UID: \"daaf1919-f9da-4151-8932-4c77a478b531\") " Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.635776 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb0e353b-9f34-432f-92f1-9102f53aeff3-utilities\") pod \"bb0e353b-9f34-432f-92f1-9102f53aeff3\" (UID: \"bb0e353b-9f34-432f-92f1-9102f53aeff3\") " Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.635793 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/767c924a-1203-477f-8501-a65f63965047-catalog-content\") pod \"767c924a-1203-477f-8501-a65f63965047\" (UID: \"767c924a-1203-477f-8501-a65f63965047\") " Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.635811 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8zjk\" (UniqueName: \"kubernetes.io/projected/67f6671f-0af7-44a3-9204-8fa77554d1d1-kube-api-access-n8zjk\") pod \"67f6671f-0af7-44a3-9204-8fa77554d1d1\" (UID: \"67f6671f-0af7-44a3-9204-8fa77554d1d1\") " Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.637154 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67f6671f-0af7-44a3-9204-8fa77554d1d1-utilities" (OuterVolumeSpecName: "utilities") pod "67f6671f-0af7-44a3-9204-8fa77554d1d1" (UID: "67f6671f-0af7-44a3-9204-8fa77554d1d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.638169 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daaf1919-f9da-4151-8932-4c77a478b531-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "daaf1919-f9da-4151-8932-4c77a478b531" (UID: "daaf1919-f9da-4151-8932-4c77a478b531"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.639015 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0e353b-9f34-432f-92f1-9102f53aeff3-utilities" (OuterVolumeSpecName: "utilities") pod "bb0e353b-9f34-432f-92f1-9102f53aeff3" (UID: "bb0e353b-9f34-432f-92f1-9102f53aeff3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.639400 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/767c924a-1203-477f-8501-a65f63965047-utilities" (OuterVolumeSpecName: "utilities") pod "767c924a-1203-477f-8501-a65f63965047" (UID: "767c924a-1203-477f-8501-a65f63965047"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.642488 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c5d23e9-80d6-4df1-9484-3d5d452231f6-utilities" (OuterVolumeSpecName: "utilities") pod "4c5d23e9-80d6-4df1-9484-3d5d452231f6" (UID: "4c5d23e9-80d6-4df1-9484-3d5d452231f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.652374 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f6671f-0af7-44a3-9204-8fa77554d1d1-kube-api-access-n8zjk" (OuterVolumeSpecName: "kube-api-access-n8zjk") pod "67f6671f-0af7-44a3-9204-8fa77554d1d1" (UID: "67f6671f-0af7-44a3-9204-8fa77554d1d1"). InnerVolumeSpecName "kube-api-access-n8zjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.663992 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daaf1919-f9da-4151-8932-4c77a478b531-kube-api-access-qqs7m" (OuterVolumeSpecName: "kube-api-access-qqs7m") pod "daaf1919-f9da-4151-8932-4c77a478b531" (UID: "daaf1919-f9da-4151-8932-4c77a478b531"). InnerVolumeSpecName "kube-api-access-qqs7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.672473 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/767c924a-1203-477f-8501-a65f63965047-kube-api-access-s2t5j" (OuterVolumeSpecName: "kube-api-access-s2t5j") pod "767c924a-1203-477f-8501-a65f63965047" (UID: "767c924a-1203-477f-8501-a65f63965047"). InnerVolumeSpecName "kube-api-access-s2t5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.675572 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daaf1919-f9da-4151-8932-4c77a478b531-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "daaf1919-f9da-4151-8932-4c77a478b531" (UID: "daaf1919-f9da-4151-8932-4c77a478b531"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.678444 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c5d23e9-80d6-4df1-9484-3d5d452231f6-kube-api-access-4zt2q" (OuterVolumeSpecName: "kube-api-access-4zt2q") pod "4c5d23e9-80d6-4df1-9484-3d5d452231f6" (UID: "4c5d23e9-80d6-4df1-9484-3d5d452231f6"). InnerVolumeSpecName "kube-api-access-4zt2q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.680051 4847 scope.go:117] "RemoveContainer" containerID="eeec3c165dc7b83984731197b9f6f528474c5137a845a7e99ff2536b2a38b16a" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.693323 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0e353b-9f34-432f-92f1-9102f53aeff3-kube-api-access-kjz6q" (OuterVolumeSpecName: "kube-api-access-kjz6q") pod "bb0e353b-9f34-432f-92f1-9102f53aeff3" (UID: "bb0e353b-9f34-432f-92f1-9102f53aeff3"). InnerVolumeSpecName "kube-api-access-kjz6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.695407 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/767c924a-1203-477f-8501-a65f63965047-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "767c924a-1203-477f-8501-a65f63965047" (UID: "767c924a-1203-477f-8501-a65f63965047"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.705328 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c5d23e9-80d6-4df1-9484-3d5d452231f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c5d23e9-80d6-4df1-9484-3d5d452231f6" (UID: "4c5d23e9-80d6-4df1-9484-3d5d452231f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.706881 4847 scope.go:117] "RemoveContainer" containerID="16ecd6be9264b640e03a04e44a816a1e8998a6a9f3e40b64cab55fc4b7ecaa76" Feb 18 00:31:43 crc kubenswrapper[4847]: E0218 00:31:43.712286 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16ecd6be9264b640e03a04e44a816a1e8998a6a9f3e40b64cab55fc4b7ecaa76\": container with ID starting with 16ecd6be9264b640e03a04e44a816a1e8998a6a9f3e40b64cab55fc4b7ecaa76 not found: ID does not exist" containerID="16ecd6be9264b640e03a04e44a816a1e8998a6a9f3e40b64cab55fc4b7ecaa76" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.712330 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16ecd6be9264b640e03a04e44a816a1e8998a6a9f3e40b64cab55fc4b7ecaa76"} err="failed to get container status \"16ecd6be9264b640e03a04e44a816a1e8998a6a9f3e40b64cab55fc4b7ecaa76\": rpc error: code = NotFound desc = could not find container \"16ecd6be9264b640e03a04e44a816a1e8998a6a9f3e40b64cab55fc4b7ecaa76\": container with ID starting with 16ecd6be9264b640e03a04e44a816a1e8998a6a9f3e40b64cab55fc4b7ecaa76 not found: ID does not exist" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.712358 4847 scope.go:117] "RemoveContainer" containerID="86c01a3e4aeb957109777a1d7f3e7fbae13bddc3dadc1db183ff3bee09da9b1a" Feb 18 00:31:43 crc kubenswrapper[4847]: E0218 00:31:43.712983 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86c01a3e4aeb957109777a1d7f3e7fbae13bddc3dadc1db183ff3bee09da9b1a\": container with ID starting with 86c01a3e4aeb957109777a1d7f3e7fbae13bddc3dadc1db183ff3bee09da9b1a not found: ID does not exist" containerID="86c01a3e4aeb957109777a1d7f3e7fbae13bddc3dadc1db183ff3bee09da9b1a" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.713044 
4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86c01a3e4aeb957109777a1d7f3e7fbae13bddc3dadc1db183ff3bee09da9b1a"} err="failed to get container status \"86c01a3e4aeb957109777a1d7f3e7fbae13bddc3dadc1db183ff3bee09da9b1a\": rpc error: code = NotFound desc = could not find container \"86c01a3e4aeb957109777a1d7f3e7fbae13bddc3dadc1db183ff3bee09da9b1a\": container with ID starting with 86c01a3e4aeb957109777a1d7f3e7fbae13bddc3dadc1db183ff3bee09da9b1a not found: ID does not exist" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.713093 4847 scope.go:117] "RemoveContainer" containerID="eeec3c165dc7b83984731197b9f6f528474c5137a845a7e99ff2536b2a38b16a" Feb 18 00:31:43 crc kubenswrapper[4847]: E0218 00:31:43.713454 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeec3c165dc7b83984731197b9f6f528474c5137a845a7e99ff2536b2a38b16a\": container with ID starting with eeec3c165dc7b83984731197b9f6f528474c5137a845a7e99ff2536b2a38b16a not found: ID does not exist" containerID="eeec3c165dc7b83984731197b9f6f528474c5137a845a7e99ff2536b2a38b16a" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.713492 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeec3c165dc7b83984731197b9f6f528474c5137a845a7e99ff2536b2a38b16a"} err="failed to get container status \"eeec3c165dc7b83984731197b9f6f528474c5137a845a7e99ff2536b2a38b16a\": rpc error: code = NotFound desc = could not find container \"eeec3c165dc7b83984731197b9f6f528474c5137a845a7e99ff2536b2a38b16a\": container with ID starting with eeec3c165dc7b83984731197b9f6f528474c5137a845a7e99ff2536b2a38b16a not found: ID does not exist" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.713510 4847 scope.go:117] "RemoveContainer" containerID="14e756059bbf6d2dcfde255e43a1bae7c1d3a3fd429e8481a40dfac04eb30656" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 
00:31:43.729396 4847 scope.go:117] "RemoveContainer" containerID="294658603ba404284b4ed09ccc3da841ce08fddc1385132aede0ae99ea303576" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.737621 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f6671f-0af7-44a3-9204-8fa77554d1d1-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.737655 4847 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/daaf1919-f9da-4151-8932-4c77a478b531-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.737671 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjz6q\" (UniqueName: \"kubernetes.io/projected/bb0e353b-9f34-432f-92f1-9102f53aeff3-kube-api-access-kjz6q\") on node \"crc\" DevicePath \"\"" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.737685 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c5d23e9-80d6-4df1-9484-3d5d452231f6-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.737698 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c5d23e9-80d6-4df1-9484-3d5d452231f6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.737712 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqs7m\" (UniqueName: \"kubernetes.io/projected/daaf1919-f9da-4151-8932-4c77a478b531-kube-api-access-qqs7m\") on node \"crc\" DevicePath \"\"" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.737725 4847 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/daaf1919-f9da-4151-8932-4c77a478b531-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.737740 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb0e353b-9f34-432f-92f1-9102f53aeff3-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.737751 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/767c924a-1203-477f-8501-a65f63965047-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.737768 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8zjk\" (UniqueName: \"kubernetes.io/projected/67f6671f-0af7-44a3-9204-8fa77554d1d1-kube-api-access-n8zjk\") on node \"crc\" DevicePath \"\"" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.737780 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zt2q\" (UniqueName: \"kubernetes.io/projected/4c5d23e9-80d6-4df1-9484-3d5d452231f6-kube-api-access-4zt2q\") on node \"crc\" DevicePath \"\"" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.737792 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/767c924a-1203-477f-8501-a65f63965047-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.737804 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2t5j\" (UniqueName: \"kubernetes.io/projected/767c924a-1203-477f-8501-a65f63965047-kube-api-access-s2t5j\") on node \"crc\" DevicePath \"\"" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.744864 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0e353b-9f34-432f-92f1-9102f53aeff3-catalog-content" (OuterVolumeSpecName: 
"catalog-content") pod "bb0e353b-9f34-432f-92f1-9102f53aeff3" (UID: "bb0e353b-9f34-432f-92f1-9102f53aeff3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.745032 4847 scope.go:117] "RemoveContainer" containerID="e92ff5160c160d87e9df4f057b7bf81f0d8a6d862fc449ea593af3bf458eeb98" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.762212 4847 scope.go:117] "RemoveContainer" containerID="e96d27a812f4d7adee4a31259ac60cad862f0e1a7aac742e8d46a645288837a4" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.776136 4847 scope.go:117] "RemoveContainer" containerID="34afd9253b44d482a3989efcbcdab02562d255f656cc1aeeb56b685568c1089a" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.792104 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67f6671f-0af7-44a3-9204-8fa77554d1d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67f6671f-0af7-44a3-9204-8fa77554d1d1" (UID: "67f6671f-0af7-44a3-9204-8fa77554d1d1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.839738 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb0e353b-9f34-432f-92f1-9102f53aeff3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.839784 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f6671f-0af7-44a3-9204-8fa77554d1d1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.963953 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4dxcv"] Feb 18 00:31:43 crc kubenswrapper[4847]: I0218 00:31:43.987905 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-px9xt"] Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.001766 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-px9xt"] Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.008309 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bv8f2"] Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.015251 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bv8f2"] Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.020246 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hwsk5"] Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.024735 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hwsk5"] Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.631041 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-tqxr4" event={"ID":"4c5d23e9-80d6-4df1-9484-3d5d452231f6","Type":"ContainerDied","Data":"76be970c28143bf498e5fa4fe1e291728c3cf57fa59966ed89f8f0b127f17816"} Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.631136 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tqxr4" Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.631401 4847 scope.go:117] "RemoveContainer" containerID="72e03cbd8e7dfb83be77793ecd1727d0259fe8690ac922c4d08a1b712eeb3d3a" Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.636747 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lr9xc" event={"ID":"67f6671f-0af7-44a3-9204-8fa77554d1d1","Type":"ContainerDied","Data":"62ad798fb38bcf35da45f54e13b263c3ca6ae6dec395357d48327edaa36b452e"} Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.636799 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lr9xc" Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.641804 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4dxcv" event={"ID":"a3803e77-d427-4d42-9e2e-c8fa87bca4d8","Type":"ContainerStarted","Data":"682102af74753be3c5ae32793e2da0e5a15b25095b53327ae83bca8fb3d48329"} Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.641847 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4dxcv" event={"ID":"a3803e77-d427-4d42-9e2e-c8fa87bca4d8","Type":"ContainerStarted","Data":"31d42e751a0be1649dacc4cd2a0252134781bd6e6faba6cf748932c63875e277"} Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.642552 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4dxcv" Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.647451 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4dxcv" Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.662673 4847 scope.go:117] "RemoveContainer" containerID="f128dfb0226ce3ff25bc536358c92a3dcfda6686f89b68d0dc5fef9e0d2f2bce" Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.677146 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4dxcv" podStartSLOduration=2.67712066 podStartE2EDuration="2.67712066s" podCreationTimestamp="2026-02-18 00:31:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:31:44.668530938 +0000 UTC m=+378.045881880" watchObservedRunningTime="2026-02-18 00:31:44.67712066 +0000 UTC m=+378.054471602" Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.687542 4847 scope.go:117] 
"RemoveContainer" containerID="ba92f2ec1a88fd702420bfc78805e15054a1f478c279b03859220391618e4491" Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.704813 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tqxr4"] Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.713961 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tqxr4"] Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.718674 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lr9xc"] Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.720878 4847 scope.go:117] "RemoveContainer" containerID="43f086b3b289848710c80c7c5bf69ee1dc5feed3f63a10ccf01fa8dae64e6365" Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.723186 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lr9xc"] Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.742851 4847 scope.go:117] "RemoveContainer" containerID="90b9a71fd35a2013abfcbab5bc4b2a5ce4ed994c23284cfeb2427d681386f054" Feb 18 00:31:44 crc kubenswrapper[4847]: I0218 00:31:44.762610 4847 scope.go:117] "RemoveContainer" containerID="95c6af5db99a1eef682e3ab701a20369b8127dd271a008ea5972dd0367c5a48d" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.412644 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c5d23e9-80d6-4df1-9484-3d5d452231f6" path="/var/lib/kubelet/pods/4c5d23e9-80d6-4df1-9484-3d5d452231f6/volumes" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.413347 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67f6671f-0af7-44a3-9204-8fa77554d1d1" path="/var/lib/kubelet/pods/67f6671f-0af7-44a3-9204-8fa77554d1d1/volumes" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.413956 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="767c924a-1203-477f-8501-a65f63965047" 
path="/var/lib/kubelet/pods/767c924a-1203-477f-8501-a65f63965047/volumes" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.414967 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb0e353b-9f34-432f-92f1-9102f53aeff3" path="/var/lib/kubelet/pods/bb0e353b-9f34-432f-92f1-9102f53aeff3/volumes" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.415642 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daaf1919-f9da-4151-8932-4c77a478b531" path="/var/lib/kubelet/pods/daaf1919-f9da-4151-8932-4c77a478b531/volumes" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.732528 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mwtpz"] Feb 18 00:31:45 crc kubenswrapper[4847]: E0218 00:31:45.732893 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="767c924a-1203-477f-8501-a65f63965047" containerName="extract-utilities" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.732914 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="767c924a-1203-477f-8501-a65f63965047" containerName="extract-utilities" Feb 18 00:31:45 crc kubenswrapper[4847]: E0218 00:31:45.732933 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0e353b-9f34-432f-92f1-9102f53aeff3" containerName="extract-utilities" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.732946 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0e353b-9f34-432f-92f1-9102f53aeff3" containerName="extract-utilities" Feb 18 00:31:45 crc kubenswrapper[4847]: E0218 00:31:45.732962 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f6671f-0af7-44a3-9204-8fa77554d1d1" containerName="extract-utilities" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.732974 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f6671f-0af7-44a3-9204-8fa77554d1d1" containerName="extract-utilities" Feb 18 00:31:45 crc kubenswrapper[4847]: E0218 
00:31:45.732986 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daaf1919-f9da-4151-8932-4c77a478b531" containerName="marketplace-operator" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.732998 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="daaf1919-f9da-4151-8932-4c77a478b531" containerName="marketplace-operator" Feb 18 00:31:45 crc kubenswrapper[4847]: E0218 00:31:45.733010 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c5d23e9-80d6-4df1-9484-3d5d452231f6" containerName="extract-utilities" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.733020 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c5d23e9-80d6-4df1-9484-3d5d452231f6" containerName="extract-utilities" Feb 18 00:31:45 crc kubenswrapper[4847]: E0218 00:31:45.733038 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c5d23e9-80d6-4df1-9484-3d5d452231f6" containerName="extract-content" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.733048 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c5d23e9-80d6-4df1-9484-3d5d452231f6" containerName="extract-content" Feb 18 00:31:45 crc kubenswrapper[4847]: E0218 00:31:45.733059 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f6671f-0af7-44a3-9204-8fa77554d1d1" containerName="registry-server" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.733070 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f6671f-0af7-44a3-9204-8fa77554d1d1" containerName="registry-server" Feb 18 00:31:45 crc kubenswrapper[4847]: E0218 00:31:45.733088 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c5d23e9-80d6-4df1-9484-3d5d452231f6" containerName="registry-server" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.733100 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c5d23e9-80d6-4df1-9484-3d5d452231f6" containerName="registry-server" Feb 18 00:31:45 crc kubenswrapper[4847]: E0218 
00:31:45.733122 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0e353b-9f34-432f-92f1-9102f53aeff3" containerName="extract-content" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.733133 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0e353b-9f34-432f-92f1-9102f53aeff3" containerName="extract-content" Feb 18 00:31:45 crc kubenswrapper[4847]: E0218 00:31:45.733147 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0e353b-9f34-432f-92f1-9102f53aeff3" containerName="registry-server" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.733159 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0e353b-9f34-432f-92f1-9102f53aeff3" containerName="registry-server" Feb 18 00:31:45 crc kubenswrapper[4847]: E0218 00:31:45.733179 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="767c924a-1203-477f-8501-a65f63965047" containerName="extract-content" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.733189 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="767c924a-1203-477f-8501-a65f63965047" containerName="extract-content" Feb 18 00:31:45 crc kubenswrapper[4847]: E0218 00:31:45.733203 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="767c924a-1203-477f-8501-a65f63965047" containerName="registry-server" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.733216 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="767c924a-1203-477f-8501-a65f63965047" containerName="registry-server" Feb 18 00:31:45 crc kubenswrapper[4847]: E0218 00:31:45.733232 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f6671f-0af7-44a3-9204-8fa77554d1d1" containerName="extract-content" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.733244 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f6671f-0af7-44a3-9204-8fa77554d1d1" containerName="extract-content" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 
00:31:45.733414 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0e353b-9f34-432f-92f1-9102f53aeff3" containerName="registry-server" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.733442 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c5d23e9-80d6-4df1-9484-3d5d452231f6" containerName="registry-server" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.733459 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="daaf1919-f9da-4151-8932-4c77a478b531" containerName="marketplace-operator" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.733479 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="67f6671f-0af7-44a3-9204-8fa77554d1d1" containerName="registry-server" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.733494 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="767c924a-1203-477f-8501-a65f63965047" containerName="registry-server" Feb 18 00:31:45 crc kubenswrapper[4847]: E0218 00:31:45.733687 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daaf1919-f9da-4151-8932-4c77a478b531" containerName="marketplace-operator" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.733702 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="daaf1919-f9da-4151-8932-4c77a478b531" containerName="marketplace-operator" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.733862 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="daaf1919-f9da-4151-8932-4c77a478b531" containerName="marketplace-operator" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.734796 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwtpz" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.736829 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.751677 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwtpz"] Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.795954 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2edc1248-18dd-42c6-878e-c3e073b33aaa-catalog-content\") pod \"redhat-marketplace-mwtpz\" (UID: \"2edc1248-18dd-42c6-878e-c3e073b33aaa\") " pod="openshift-marketplace/redhat-marketplace-mwtpz" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.796447 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4cns\" (UniqueName: \"kubernetes.io/projected/2edc1248-18dd-42c6-878e-c3e073b33aaa-kube-api-access-q4cns\") pod \"redhat-marketplace-mwtpz\" (UID: \"2edc1248-18dd-42c6-878e-c3e073b33aaa\") " pod="openshift-marketplace/redhat-marketplace-mwtpz" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.796680 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2edc1248-18dd-42c6-878e-c3e073b33aaa-utilities\") pod \"redhat-marketplace-mwtpz\" (UID: \"2edc1248-18dd-42c6-878e-c3e073b33aaa\") " pod="openshift-marketplace/redhat-marketplace-mwtpz" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.898103 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2edc1248-18dd-42c6-878e-c3e073b33aaa-catalog-content\") pod \"redhat-marketplace-mwtpz\" (UID: 
\"2edc1248-18dd-42c6-878e-c3e073b33aaa\") " pod="openshift-marketplace/redhat-marketplace-mwtpz" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.898450 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4cns\" (UniqueName: \"kubernetes.io/projected/2edc1248-18dd-42c6-878e-c3e073b33aaa-kube-api-access-q4cns\") pod \"redhat-marketplace-mwtpz\" (UID: \"2edc1248-18dd-42c6-878e-c3e073b33aaa\") " pod="openshift-marketplace/redhat-marketplace-mwtpz" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.898581 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2edc1248-18dd-42c6-878e-c3e073b33aaa-utilities\") pod \"redhat-marketplace-mwtpz\" (UID: \"2edc1248-18dd-42c6-878e-c3e073b33aaa\") " pod="openshift-marketplace/redhat-marketplace-mwtpz" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.899056 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2edc1248-18dd-42c6-878e-c3e073b33aaa-catalog-content\") pod \"redhat-marketplace-mwtpz\" (UID: \"2edc1248-18dd-42c6-878e-c3e073b33aaa\") " pod="openshift-marketplace/redhat-marketplace-mwtpz" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.899379 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2edc1248-18dd-42c6-878e-c3e073b33aaa-utilities\") pod \"redhat-marketplace-mwtpz\" (UID: \"2edc1248-18dd-42c6-878e-c3e073b33aaa\") " pod="openshift-marketplace/redhat-marketplace-mwtpz" Feb 18 00:31:45 crc kubenswrapper[4847]: I0218 00:31:45.920895 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4cns\" (UniqueName: \"kubernetes.io/projected/2edc1248-18dd-42c6-878e-c3e073b33aaa-kube-api-access-q4cns\") pod \"redhat-marketplace-mwtpz\" (UID: \"2edc1248-18dd-42c6-878e-c3e073b33aaa\") " 
pod="openshift-marketplace/redhat-marketplace-mwtpz" Feb 18 00:31:46 crc kubenswrapper[4847]: I0218 00:31:46.089022 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwtpz" Feb 18 00:31:46 crc kubenswrapper[4847]: I0218 00:31:46.554319 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwtpz"] Feb 18 00:31:46 crc kubenswrapper[4847]: W0218 00:31:46.564977 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2edc1248_18dd_42c6_878e_c3e073b33aaa.slice/crio-3751224e99e4c7e6ae878ec0286ed915111bd4dd1bd7466d372d99f5e147ff0d WatchSource:0}: Error finding container 3751224e99e4c7e6ae878ec0286ed915111bd4dd1bd7466d372d99f5e147ff0d: Status 404 returned error can't find the container with id 3751224e99e4c7e6ae878ec0286ed915111bd4dd1bd7466d372d99f5e147ff0d Feb 18 00:31:46 crc kubenswrapper[4847]: I0218 00:31:46.661177 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwtpz" event={"ID":"2edc1248-18dd-42c6-878e-c3e073b33aaa","Type":"ContainerStarted","Data":"3751224e99e4c7e6ae878ec0286ed915111bd4dd1bd7466d372d99f5e147ff0d"} Feb 18 00:31:46 crc kubenswrapper[4847]: I0218 00:31:46.731371 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jjhq5"] Feb 18 00:31:46 crc kubenswrapper[4847]: I0218 00:31:46.732809 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jjhq5" Feb 18 00:31:46 crc kubenswrapper[4847]: I0218 00:31:46.734039 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jjhq5"] Feb 18 00:31:46 crc kubenswrapper[4847]: I0218 00:31:46.736297 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 18 00:31:46 crc kubenswrapper[4847]: I0218 00:31:46.914825 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx45m\" (UniqueName: \"kubernetes.io/projected/b3a5f225-da8f-4b7c-a346-2926b83b1d0f-kube-api-access-xx45m\") pod \"redhat-operators-jjhq5\" (UID: \"b3a5f225-da8f-4b7c-a346-2926b83b1d0f\") " pod="openshift-marketplace/redhat-operators-jjhq5" Feb 18 00:31:46 crc kubenswrapper[4847]: I0218 00:31:46.915651 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3a5f225-da8f-4b7c-a346-2926b83b1d0f-utilities\") pod \"redhat-operators-jjhq5\" (UID: \"b3a5f225-da8f-4b7c-a346-2926b83b1d0f\") " pod="openshift-marketplace/redhat-operators-jjhq5" Feb 18 00:31:46 crc kubenswrapper[4847]: I0218 00:31:46.915802 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3a5f225-da8f-4b7c-a346-2926b83b1d0f-catalog-content\") pod \"redhat-operators-jjhq5\" (UID: \"b3a5f225-da8f-4b7c-a346-2926b83b1d0f\") " pod="openshift-marketplace/redhat-operators-jjhq5" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.016960 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx45m\" (UniqueName: \"kubernetes.io/projected/b3a5f225-da8f-4b7c-a346-2926b83b1d0f-kube-api-access-xx45m\") pod \"redhat-operators-jjhq5\" (UID: 
\"b3a5f225-da8f-4b7c-a346-2926b83b1d0f\") " pod="openshift-marketplace/redhat-operators-jjhq5" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.017180 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3a5f225-da8f-4b7c-a346-2926b83b1d0f-utilities\") pod \"redhat-operators-jjhq5\" (UID: \"b3a5f225-da8f-4b7c-a346-2926b83b1d0f\") " pod="openshift-marketplace/redhat-operators-jjhq5" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.017255 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3a5f225-da8f-4b7c-a346-2926b83b1d0f-catalog-content\") pod \"redhat-operators-jjhq5\" (UID: \"b3a5f225-da8f-4b7c-a346-2926b83b1d0f\") " pod="openshift-marketplace/redhat-operators-jjhq5" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.017913 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3a5f225-da8f-4b7c-a346-2926b83b1d0f-utilities\") pod \"redhat-operators-jjhq5\" (UID: \"b3a5f225-da8f-4b7c-a346-2926b83b1d0f\") " pod="openshift-marketplace/redhat-operators-jjhq5" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.018334 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3a5f225-da8f-4b7c-a346-2926b83b1d0f-catalog-content\") pod \"redhat-operators-jjhq5\" (UID: \"b3a5f225-da8f-4b7c-a346-2926b83b1d0f\") " pod="openshift-marketplace/redhat-operators-jjhq5" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.043393 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx45m\" (UniqueName: \"kubernetes.io/projected/b3a5f225-da8f-4b7c-a346-2926b83b1d0f-kube-api-access-xx45m\") pod \"redhat-operators-jjhq5\" (UID: \"b3a5f225-da8f-4b7c-a346-2926b83b1d0f\") " 
pod="openshift-marketplace/redhat-operators-jjhq5" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.056024 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jjhq5" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.294876 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jjhq5"] Feb 18 00:31:47 crc kubenswrapper[4847]: W0218 00:31:47.307263 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3a5f225_da8f_4b7c_a346_2926b83b1d0f.slice/crio-f81783772a1cfef9f42e99a0c4d98e1abe58001986572319365c085f6be64c14 WatchSource:0}: Error finding container f81783772a1cfef9f42e99a0c4d98e1abe58001986572319365c085f6be64c14: Status 404 returned error can't find the container with id f81783772a1cfef9f42e99a0c4d98e1abe58001986572319365c085f6be64c14 Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.667826 4847 generic.go:334] "Generic (PLEG): container finished" podID="2edc1248-18dd-42c6-878e-c3e073b33aaa" containerID="6716faa9ea4d61ef9f4d8bff16f7e1c16cdc44ee1790f1360792448a545d96e6" exitCode=0 Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.667872 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwtpz" event={"ID":"2edc1248-18dd-42c6-878e-c3e073b33aaa","Type":"ContainerDied","Data":"6716faa9ea4d61ef9f4d8bff16f7e1c16cdc44ee1790f1360792448a545d96e6"} Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.679010 4847 generic.go:334] "Generic (PLEG): container finished" podID="b3a5f225-da8f-4b7c-a346-2926b83b1d0f" containerID="5a476019488a664ad751a956843405dfff6ebc68e8f2ad5090802cc7e25c2f0a" exitCode=0 Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.679078 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jjhq5" 
event={"ID":"b3a5f225-da8f-4b7c-a346-2926b83b1d0f","Type":"ContainerDied","Data":"5a476019488a664ad751a956843405dfff6ebc68e8f2ad5090802cc7e25c2f0a"} Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.679214 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jjhq5" event={"ID":"b3a5f225-da8f-4b7c-a346-2926b83b1d0f","Type":"ContainerStarted","Data":"f81783772a1cfef9f42e99a0c4d98e1abe58001986572319365c085f6be64c14"} Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.716189 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jcl5t"] Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.717536 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.751115 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jcl5t"] Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.828507 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jcl5t\" (UID: \"e72601f1-ef44-4127-a08d-78accf48dea0\") " pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.828573 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e72601f1-ef44-4127-a08d-78accf48dea0-registry-certificates\") pod \"image-registry-66df7c8f76-jcl5t\" (UID: \"e72601f1-ef44-4127-a08d-78accf48dea0\") " pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.828602 4847 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e72601f1-ef44-4127-a08d-78accf48dea0-bound-sa-token\") pod \"image-registry-66df7c8f76-jcl5t\" (UID: \"e72601f1-ef44-4127-a08d-78accf48dea0\") " pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.828658 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e72601f1-ef44-4127-a08d-78accf48dea0-trusted-ca\") pod \"image-registry-66df7c8f76-jcl5t\" (UID: \"e72601f1-ef44-4127-a08d-78accf48dea0\") " pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.828683 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e72601f1-ef44-4127-a08d-78accf48dea0-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jcl5t\" (UID: \"e72601f1-ef44-4127-a08d-78accf48dea0\") " pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.828711 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kff86\" (UniqueName: \"kubernetes.io/projected/e72601f1-ef44-4127-a08d-78accf48dea0-kube-api-access-kff86\") pod \"image-registry-66df7c8f76-jcl5t\" (UID: \"e72601f1-ef44-4127-a08d-78accf48dea0\") " pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.828987 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e72601f1-ef44-4127-a08d-78accf48dea0-registry-tls\") pod \"image-registry-66df7c8f76-jcl5t\" (UID: 
\"e72601f1-ef44-4127-a08d-78accf48dea0\") " pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.829038 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e72601f1-ef44-4127-a08d-78accf48dea0-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jcl5t\" (UID: \"e72601f1-ef44-4127-a08d-78accf48dea0\") " pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.857345 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jcl5t\" (UID: \"e72601f1-ef44-4127-a08d-78accf48dea0\") " pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.930761 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e72601f1-ef44-4127-a08d-78accf48dea0-registry-tls\") pod \"image-registry-66df7c8f76-jcl5t\" (UID: \"e72601f1-ef44-4127-a08d-78accf48dea0\") " pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.930865 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e72601f1-ef44-4127-a08d-78accf48dea0-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jcl5t\" (UID: \"e72601f1-ef44-4127-a08d-78accf48dea0\") " pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.930944 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/e72601f1-ef44-4127-a08d-78accf48dea0-registry-certificates\") pod \"image-registry-66df7c8f76-jcl5t\" (UID: \"e72601f1-ef44-4127-a08d-78accf48dea0\") " pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.930972 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e72601f1-ef44-4127-a08d-78accf48dea0-bound-sa-token\") pod \"image-registry-66df7c8f76-jcl5t\" (UID: \"e72601f1-ef44-4127-a08d-78accf48dea0\") " pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.931006 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e72601f1-ef44-4127-a08d-78accf48dea0-trusted-ca\") pod \"image-registry-66df7c8f76-jcl5t\" (UID: \"e72601f1-ef44-4127-a08d-78accf48dea0\") " pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.931081 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e72601f1-ef44-4127-a08d-78accf48dea0-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jcl5t\" (UID: \"e72601f1-ef44-4127-a08d-78accf48dea0\") " pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.931125 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kff86\" (UniqueName: \"kubernetes.io/projected/e72601f1-ef44-4127-a08d-78accf48dea0-kube-api-access-kff86\") pod \"image-registry-66df7c8f76-jcl5t\" (UID: \"e72601f1-ef44-4127-a08d-78accf48dea0\") " pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.932255 4847 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e72601f1-ef44-4127-a08d-78accf48dea0-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jcl5t\" (UID: \"e72601f1-ef44-4127-a08d-78accf48dea0\") " pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.932914 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e72601f1-ef44-4127-a08d-78accf48dea0-registry-certificates\") pod \"image-registry-66df7c8f76-jcl5t\" (UID: \"e72601f1-ef44-4127-a08d-78accf48dea0\") " pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.932936 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e72601f1-ef44-4127-a08d-78accf48dea0-trusted-ca\") pod \"image-registry-66df7c8f76-jcl5t\" (UID: \"e72601f1-ef44-4127-a08d-78accf48dea0\") " pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.940455 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e72601f1-ef44-4127-a08d-78accf48dea0-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jcl5t\" (UID: \"e72601f1-ef44-4127-a08d-78accf48dea0\") " pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.942044 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e72601f1-ef44-4127-a08d-78accf48dea0-registry-tls\") pod \"image-registry-66df7c8f76-jcl5t\" (UID: \"e72601f1-ef44-4127-a08d-78accf48dea0\") " pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:47 crc 
kubenswrapper[4847]: I0218 00:31:47.949214 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kff86\" (UniqueName: \"kubernetes.io/projected/e72601f1-ef44-4127-a08d-78accf48dea0-kube-api-access-kff86\") pod \"image-registry-66df7c8f76-jcl5t\" (UID: \"e72601f1-ef44-4127-a08d-78accf48dea0\") " pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:47 crc kubenswrapper[4847]: I0218 00:31:47.952185 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e72601f1-ef44-4127-a08d-78accf48dea0-bound-sa-token\") pod \"image-registry-66df7c8f76-jcl5t\" (UID: \"e72601f1-ef44-4127-a08d-78accf48dea0\") " pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:48 crc kubenswrapper[4847]: I0218 00:31:48.040705 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:48 crc kubenswrapper[4847]: I0218 00:31:48.134949 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pw6fv"] Feb 18 00:31:48 crc kubenswrapper[4847]: I0218 00:31:48.136343 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pw6fv" Feb 18 00:31:48 crc kubenswrapper[4847]: I0218 00:31:48.138889 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 18 00:31:48 crc kubenswrapper[4847]: I0218 00:31:48.192669 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pw6fv"] Feb 18 00:31:48 crc kubenswrapper[4847]: I0218 00:31:48.235186 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b94c451-a6b9-4649-a612-a39065b4e83c-catalog-content\") pod \"community-operators-pw6fv\" (UID: \"6b94c451-a6b9-4649-a612-a39065b4e83c\") " pod="openshift-marketplace/community-operators-pw6fv" Feb 18 00:31:48 crc kubenswrapper[4847]: I0218 00:31:48.235263 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b94c451-a6b9-4649-a612-a39065b4e83c-utilities\") pod \"community-operators-pw6fv\" (UID: \"6b94c451-a6b9-4649-a612-a39065b4e83c\") " pod="openshift-marketplace/community-operators-pw6fv" Feb 18 00:31:48 crc kubenswrapper[4847]: I0218 00:31:48.235300 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zjw7\" (UniqueName: \"kubernetes.io/projected/6b94c451-a6b9-4649-a612-a39065b4e83c-kube-api-access-2zjw7\") pod \"community-operators-pw6fv\" (UID: \"6b94c451-a6b9-4649-a612-a39065b4e83c\") " pod="openshift-marketplace/community-operators-pw6fv" Feb 18 00:31:48 crc kubenswrapper[4847]: I0218 00:31:48.326948 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jcl5t"] Feb 18 00:31:48 crc kubenswrapper[4847]: I0218 00:31:48.336599 4847 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b94c451-a6b9-4649-a612-a39065b4e83c-utilities\") pod \"community-operators-pw6fv\" (UID: \"6b94c451-a6b9-4649-a612-a39065b4e83c\") " pod="openshift-marketplace/community-operators-pw6fv" Feb 18 00:31:48 crc kubenswrapper[4847]: I0218 00:31:48.336654 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zjw7\" (UniqueName: \"kubernetes.io/projected/6b94c451-a6b9-4649-a612-a39065b4e83c-kube-api-access-2zjw7\") pod \"community-operators-pw6fv\" (UID: \"6b94c451-a6b9-4649-a612-a39065b4e83c\") " pod="openshift-marketplace/community-operators-pw6fv" Feb 18 00:31:48 crc kubenswrapper[4847]: I0218 00:31:48.336721 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b94c451-a6b9-4649-a612-a39065b4e83c-catalog-content\") pod \"community-operators-pw6fv\" (UID: \"6b94c451-a6b9-4649-a612-a39065b4e83c\") " pod="openshift-marketplace/community-operators-pw6fv" Feb 18 00:31:48 crc kubenswrapper[4847]: I0218 00:31:48.337160 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b94c451-a6b9-4649-a612-a39065b4e83c-utilities\") pod \"community-operators-pw6fv\" (UID: \"6b94c451-a6b9-4649-a612-a39065b4e83c\") " pod="openshift-marketplace/community-operators-pw6fv" Feb 18 00:31:48 crc kubenswrapper[4847]: I0218 00:31:48.337175 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b94c451-a6b9-4649-a612-a39065b4e83c-catalog-content\") pod \"community-operators-pw6fv\" (UID: \"6b94c451-a6b9-4649-a612-a39065b4e83c\") " pod="openshift-marketplace/community-operators-pw6fv" Feb 18 00:31:48 crc kubenswrapper[4847]: W0218 00:31:48.344091 4847 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode72601f1_ef44_4127_a08d_78accf48dea0.slice/crio-fd6014f2aa4f086572b465070d06a9f706e1d6d3b16950eab799741cbc7468c2 WatchSource:0}: Error finding container fd6014f2aa4f086572b465070d06a9f706e1d6d3b16950eab799741cbc7468c2: Status 404 returned error can't find the container with id fd6014f2aa4f086572b465070d06a9f706e1d6d3b16950eab799741cbc7468c2 Feb 18 00:31:48 crc kubenswrapper[4847]: I0218 00:31:48.368593 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zjw7\" (UniqueName: \"kubernetes.io/projected/6b94c451-a6b9-4649-a612-a39065b4e83c-kube-api-access-2zjw7\") pod \"community-operators-pw6fv\" (UID: \"6b94c451-a6b9-4649-a612-a39065b4e83c\") " pod="openshift-marketplace/community-operators-pw6fv" Feb 18 00:31:48 crc kubenswrapper[4847]: I0218 00:31:48.507720 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pw6fv" Feb 18 00:31:48 crc kubenswrapper[4847]: I0218 00:31:48.688366 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" event={"ID":"e72601f1-ef44-4127-a08d-78accf48dea0","Type":"ContainerStarted","Data":"eea8de108d3fb0536cdc02005c638ba7333205f83170a23687f360257bb88c13"} Feb 18 00:31:48 crc kubenswrapper[4847]: I0218 00:31:48.688784 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" event={"ID":"e72601f1-ef44-4127-a08d-78accf48dea0","Type":"ContainerStarted","Data":"fd6014f2aa4f086572b465070d06a9f706e1d6d3b16950eab799741cbc7468c2"} Feb 18 00:31:48 crc kubenswrapper[4847]: I0218 00:31:48.689905 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:31:48 crc kubenswrapper[4847]: I0218 00:31:48.694716 4847 generic.go:334] "Generic (PLEG): container finished" 
podID="2edc1248-18dd-42c6-878e-c3e073b33aaa" containerID="2ba59125adc8862d3ce10ed0b227ea8126d5da4ee6188ed38642e84564c1a3c4" exitCode=0 Feb 18 00:31:48 crc kubenswrapper[4847]: I0218 00:31:48.694779 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwtpz" event={"ID":"2edc1248-18dd-42c6-878e-c3e073b33aaa","Type":"ContainerDied","Data":"2ba59125adc8862d3ce10ed0b227ea8126d5da4ee6188ed38642e84564c1a3c4"} Feb 18 00:31:48 crc kubenswrapper[4847]: I0218 00:31:48.716735 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" podStartSLOduration=1.716714957 podStartE2EDuration="1.716714957s" podCreationTimestamp="2026-02-18 00:31:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:31:48.711290222 +0000 UTC m=+382.088641164" watchObservedRunningTime="2026-02-18 00:31:48.716714957 +0000 UTC m=+382.094065899" Feb 18 00:31:48 crc kubenswrapper[4847]: I0218 00:31:48.778609 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pw6fv"] Feb 18 00:31:49 crc kubenswrapper[4847]: I0218 00:31:49.126817 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vj5w5"] Feb 18 00:31:49 crc kubenswrapper[4847]: I0218 00:31:49.128853 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vj5w5" Feb 18 00:31:49 crc kubenswrapper[4847]: I0218 00:31:49.131252 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 18 00:31:49 crc kubenswrapper[4847]: I0218 00:31:49.140564 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vj5w5"] Feb 18 00:31:49 crc kubenswrapper[4847]: I0218 00:31:49.253139 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe22ea9b-4ee3-46dd-afeb-803d41ac163b-catalog-content\") pod \"certified-operators-vj5w5\" (UID: \"fe22ea9b-4ee3-46dd-afeb-803d41ac163b\") " pod="openshift-marketplace/certified-operators-vj5w5" Feb 18 00:31:49 crc kubenswrapper[4847]: I0218 00:31:49.253205 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe22ea9b-4ee3-46dd-afeb-803d41ac163b-utilities\") pod \"certified-operators-vj5w5\" (UID: \"fe22ea9b-4ee3-46dd-afeb-803d41ac163b\") " pod="openshift-marketplace/certified-operators-vj5w5" Feb 18 00:31:49 crc kubenswrapper[4847]: I0218 00:31:49.253280 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p4bl\" (UniqueName: \"kubernetes.io/projected/fe22ea9b-4ee3-46dd-afeb-803d41ac163b-kube-api-access-2p4bl\") pod \"certified-operators-vj5w5\" (UID: \"fe22ea9b-4ee3-46dd-afeb-803d41ac163b\") " pod="openshift-marketplace/certified-operators-vj5w5" Feb 18 00:31:49 crc kubenswrapper[4847]: I0218 00:31:49.354697 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe22ea9b-4ee3-46dd-afeb-803d41ac163b-utilities\") pod \"certified-operators-vj5w5\" (UID: 
\"fe22ea9b-4ee3-46dd-afeb-803d41ac163b\") " pod="openshift-marketplace/certified-operators-vj5w5" Feb 18 00:31:49 crc kubenswrapper[4847]: I0218 00:31:49.354807 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p4bl\" (UniqueName: \"kubernetes.io/projected/fe22ea9b-4ee3-46dd-afeb-803d41ac163b-kube-api-access-2p4bl\") pod \"certified-operators-vj5w5\" (UID: \"fe22ea9b-4ee3-46dd-afeb-803d41ac163b\") " pod="openshift-marketplace/certified-operators-vj5w5" Feb 18 00:31:49 crc kubenswrapper[4847]: I0218 00:31:49.354841 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe22ea9b-4ee3-46dd-afeb-803d41ac163b-catalog-content\") pod \"certified-operators-vj5w5\" (UID: \"fe22ea9b-4ee3-46dd-afeb-803d41ac163b\") " pod="openshift-marketplace/certified-operators-vj5w5" Feb 18 00:31:49 crc kubenswrapper[4847]: I0218 00:31:49.355232 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe22ea9b-4ee3-46dd-afeb-803d41ac163b-utilities\") pod \"certified-operators-vj5w5\" (UID: \"fe22ea9b-4ee3-46dd-afeb-803d41ac163b\") " pod="openshift-marketplace/certified-operators-vj5w5" Feb 18 00:31:49 crc kubenswrapper[4847]: I0218 00:31:49.355567 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe22ea9b-4ee3-46dd-afeb-803d41ac163b-catalog-content\") pod \"certified-operators-vj5w5\" (UID: \"fe22ea9b-4ee3-46dd-afeb-803d41ac163b\") " pod="openshift-marketplace/certified-operators-vj5w5" Feb 18 00:31:49 crc kubenswrapper[4847]: I0218 00:31:49.378565 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p4bl\" (UniqueName: \"kubernetes.io/projected/fe22ea9b-4ee3-46dd-afeb-803d41ac163b-kube-api-access-2p4bl\") pod \"certified-operators-vj5w5\" (UID: 
\"fe22ea9b-4ee3-46dd-afeb-803d41ac163b\") " pod="openshift-marketplace/certified-operators-vj5w5" Feb 18 00:31:49 crc kubenswrapper[4847]: I0218 00:31:49.456359 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vj5w5" Feb 18 00:31:49 crc kubenswrapper[4847]: I0218 00:31:49.669518 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vj5w5"] Feb 18 00:31:49 crc kubenswrapper[4847]: I0218 00:31:49.704006 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwtpz" event={"ID":"2edc1248-18dd-42c6-878e-c3e073b33aaa","Type":"ContainerStarted","Data":"4d4cdc6b3cd6d4d5eef47dc903eb74cbae3d16e7bc1e6dffc771c757baa14602"} Feb 18 00:31:49 crc kubenswrapper[4847]: I0218 00:31:49.706806 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vj5w5" event={"ID":"fe22ea9b-4ee3-46dd-afeb-803d41ac163b","Type":"ContainerStarted","Data":"ab08392ea3145f258988e3425b3ff22cde75a8a99095490df9a37288c0087824"} Feb 18 00:31:49 crc kubenswrapper[4847]: I0218 00:31:49.709829 4847 generic.go:334] "Generic (PLEG): container finished" podID="6b94c451-a6b9-4649-a612-a39065b4e83c" containerID="d3b4c6221acb6113a865ae952e84e43dedb965d974a175c5cfe3a0bd34efd0c3" exitCode=0 Feb 18 00:31:49 crc kubenswrapper[4847]: I0218 00:31:49.709908 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw6fv" event={"ID":"6b94c451-a6b9-4649-a612-a39065b4e83c","Type":"ContainerDied","Data":"d3b4c6221acb6113a865ae952e84e43dedb965d974a175c5cfe3a0bd34efd0c3"} Feb 18 00:31:49 crc kubenswrapper[4847]: I0218 00:31:49.709937 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw6fv" 
event={"ID":"6b94c451-a6b9-4649-a612-a39065b4e83c","Type":"ContainerStarted","Data":"7844e7b374a3e73684bf73c46f79a399cf15d7575d1fa5073fe777a50d1e2921"} Feb 18 00:31:49 crc kubenswrapper[4847]: I0218 00:31:49.718709 4847 generic.go:334] "Generic (PLEG): container finished" podID="b3a5f225-da8f-4b7c-a346-2926b83b1d0f" containerID="09720ec3e9e04e67128b5ba45787cca7a8e8f8fdcd49f9c7b59aa6b48b55f0fb" exitCode=0 Feb 18 00:31:49 crc kubenswrapper[4847]: I0218 00:31:49.719977 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jjhq5" event={"ID":"b3a5f225-da8f-4b7c-a346-2926b83b1d0f","Type":"ContainerDied","Data":"09720ec3e9e04e67128b5ba45787cca7a8e8f8fdcd49f9c7b59aa6b48b55f0fb"} Feb 18 00:31:49 crc kubenswrapper[4847]: I0218 00:31:49.755139 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mwtpz" podStartSLOduration=3.166312619 podStartE2EDuration="4.755118482s" podCreationTimestamp="2026-02-18 00:31:45 +0000 UTC" firstStartedPulling="2026-02-18 00:31:47.669381331 +0000 UTC m=+381.046732283" lastFinishedPulling="2026-02-18 00:31:49.258187204 +0000 UTC m=+382.635538146" observedRunningTime="2026-02-18 00:31:49.729064077 +0000 UTC m=+383.106415039" watchObservedRunningTime="2026-02-18 00:31:49.755118482 +0000 UTC m=+383.132469424" Feb 18 00:31:50 crc kubenswrapper[4847]: I0218 00:31:50.726884 4847 generic.go:334] "Generic (PLEG): container finished" podID="fe22ea9b-4ee3-46dd-afeb-803d41ac163b" containerID="defa891a2176ab73ebb44edf5294eca525fd46b7634e7cc66f12e8363f9ba1e7" exitCode=0 Feb 18 00:31:50 crc kubenswrapper[4847]: I0218 00:31:50.726967 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vj5w5" event={"ID":"fe22ea9b-4ee3-46dd-afeb-803d41ac163b","Type":"ContainerDied","Data":"defa891a2176ab73ebb44edf5294eca525fd46b7634e7cc66f12e8363f9ba1e7"} Feb 18 00:31:50 crc kubenswrapper[4847]: I0218 00:31:50.730527 
4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw6fv" event={"ID":"6b94c451-a6b9-4649-a612-a39065b4e83c","Type":"ContainerStarted","Data":"b52d45cde22006108e5e12e4180d97d7a8505837e056dfc5fd66d94a88340d97"} Feb 18 00:31:50 crc kubenswrapper[4847]: I0218 00:31:50.734297 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jjhq5" event={"ID":"b3a5f225-da8f-4b7c-a346-2926b83b1d0f","Type":"ContainerStarted","Data":"c07e4ea7b47e244a21cc7df55313a17979daf2c2d39d5bf1ae9c7aa61c4d2675"} Feb 18 00:31:50 crc kubenswrapper[4847]: I0218 00:31:50.773555 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jjhq5" podStartSLOduration=2.341586668 podStartE2EDuration="4.773532966s" podCreationTimestamp="2026-02-18 00:31:46 +0000 UTC" firstStartedPulling="2026-02-18 00:31:47.682899294 +0000 UTC m=+381.060250236" lastFinishedPulling="2026-02-18 00:31:50.114845592 +0000 UTC m=+383.492196534" observedRunningTime="2026-02-18 00:31:50.771122083 +0000 UTC m=+384.148473025" watchObservedRunningTime="2026-02-18 00:31:50.773532966 +0000 UTC m=+384.150883908" Feb 18 00:31:51 crc kubenswrapper[4847]: I0218 00:31:51.745983 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vj5w5" event={"ID":"fe22ea9b-4ee3-46dd-afeb-803d41ac163b","Type":"ContainerStarted","Data":"71db3d6812f687e6fa06303bf410c9b7ec3934fc26b103a360faad9c3f3fdda4"} Feb 18 00:31:51 crc kubenswrapper[4847]: I0218 00:31:51.751757 4847 generic.go:334] "Generic (PLEG): container finished" podID="6b94c451-a6b9-4649-a612-a39065b4e83c" containerID="b52d45cde22006108e5e12e4180d97d7a8505837e056dfc5fd66d94a88340d97" exitCode=0 Feb 18 00:31:51 crc kubenswrapper[4847]: I0218 00:31:51.752659 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw6fv" 
event={"ID":"6b94c451-a6b9-4649-a612-a39065b4e83c","Type":"ContainerDied","Data":"b52d45cde22006108e5e12e4180d97d7a8505837e056dfc5fd66d94a88340d97"} Feb 18 00:31:52 crc kubenswrapper[4847]: I0218 00:31:52.730248 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6889d7b855-r6nw4"] Feb 18 00:31:52 crc kubenswrapper[4847]: I0218 00:31:52.734446 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4" podUID="380cc2f5-0ff8-4c80-8ebe-5afc80e47acf" containerName="controller-manager" containerID="cri-o://e767e780fd3559b53ed49be28b22fea8faba825f973a1924e542f1dcd86aab53" gracePeriod=30 Feb 18 00:31:52 crc kubenswrapper[4847]: I0218 00:31:52.759638 4847 generic.go:334] "Generic (PLEG): container finished" podID="fe22ea9b-4ee3-46dd-afeb-803d41ac163b" containerID="71db3d6812f687e6fa06303bf410c9b7ec3934fc26b103a360faad9c3f3fdda4" exitCode=0 Feb 18 00:31:52 crc kubenswrapper[4847]: I0218 00:31:52.759690 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vj5w5" event={"ID":"fe22ea9b-4ee3-46dd-afeb-803d41ac163b","Type":"ContainerDied","Data":"71db3d6812f687e6fa06303bf410c9b7ec3934fc26b103a360faad9c3f3fdda4"} Feb 18 00:31:52 crc kubenswrapper[4847]: E0218 00:31:52.867037 4847 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod380cc2f5_0ff8_4c80_8ebe_5afc80e47acf.slice/crio-e767e780fd3559b53ed49be28b22fea8faba825f973a1924e542f1dcd86aab53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod380cc2f5_0ff8_4c80_8ebe_5afc80e47acf.slice/crio-conmon-e767e780fd3559b53ed49be28b22fea8faba825f973a1924e542f1dcd86aab53.scope\": RecentStats: unable to find data in memory cache]" Feb 18 00:31:53 crc 
kubenswrapper[4847]: I0218 00:31:53.117802 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.151287 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6466b54c89-xhxqc"] Feb 18 00:31:53 crc kubenswrapper[4847]: E0218 00:31:53.156145 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="380cc2f5-0ff8-4c80-8ebe-5afc80e47acf" containerName="controller-manager" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.156211 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="380cc2f5-0ff8-4c80-8ebe-5afc80e47acf" containerName="controller-manager" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.156685 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="380cc2f5-0ff8-4c80-8ebe-5afc80e47acf" containerName="controller-manager" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.157566 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6466b54c89-xhxqc" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.203822 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6466b54c89-xhxqc"] Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.236057 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-client-ca\") pod \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\" (UID: \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\") " Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.236155 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-serving-cert\") pod \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\" (UID: \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\") " Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.236198 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-config\") pod \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\" (UID: \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\") " Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.236264 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmbsv\" (UniqueName: \"kubernetes.io/projected/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-kube-api-access-bmbsv\") pod \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\" (UID: \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\") " Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.236403 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-proxy-ca-bundles\") pod \"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\" (UID: 
\"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf\") " Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.238524 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "380cc2f5-0ff8-4c80-8ebe-5afc80e47acf" (UID: "380cc2f5-0ff8-4c80-8ebe-5afc80e47acf"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.238670 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-config" (OuterVolumeSpecName: "config") pod "380cc2f5-0ff8-4c80-8ebe-5afc80e47acf" (UID: "380cc2f5-0ff8-4c80-8ebe-5afc80e47acf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.239483 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-client-ca" (OuterVolumeSpecName: "client-ca") pod "380cc2f5-0ff8-4c80-8ebe-5afc80e47acf" (UID: "380cc2f5-0ff8-4c80-8ebe-5afc80e47acf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.248724 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-kube-api-access-bmbsv" (OuterVolumeSpecName: "kube-api-access-bmbsv") pod "380cc2f5-0ff8-4c80-8ebe-5afc80e47acf" (UID: "380cc2f5-0ff8-4c80-8ebe-5afc80e47acf"). InnerVolumeSpecName "kube-api-access-bmbsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.254839 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "380cc2f5-0ff8-4c80-8ebe-5afc80e47acf" (UID: "380cc2f5-0ff8-4c80-8ebe-5afc80e47acf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.340290 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da7c2536-4379-4301-8586-903c126e31bb-config\") pod \"controller-manager-6466b54c89-xhxqc\" (UID: \"da7c2536-4379-4301-8586-903c126e31bb\") " pod="openshift-controller-manager/controller-manager-6466b54c89-xhxqc" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.340358 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da7c2536-4379-4301-8586-903c126e31bb-serving-cert\") pod \"controller-manager-6466b54c89-xhxqc\" (UID: \"da7c2536-4379-4301-8586-903c126e31bb\") " pod="openshift-controller-manager/controller-manager-6466b54c89-xhxqc" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.340448 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da7c2536-4379-4301-8586-903c126e31bb-proxy-ca-bundles\") pod \"controller-manager-6466b54c89-xhxqc\" (UID: \"da7c2536-4379-4301-8586-903c126e31bb\") " pod="openshift-controller-manager/controller-manager-6466b54c89-xhxqc" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.340681 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62gws\" (UniqueName: 
\"kubernetes.io/projected/da7c2536-4379-4301-8586-903c126e31bb-kube-api-access-62gws\") pod \"controller-manager-6466b54c89-xhxqc\" (UID: \"da7c2536-4379-4301-8586-903c126e31bb\") " pod="openshift-controller-manager/controller-manager-6466b54c89-xhxqc" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.340712 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da7c2536-4379-4301-8586-903c126e31bb-client-ca\") pod \"controller-manager-6466b54c89-xhxqc\" (UID: \"da7c2536-4379-4301-8586-903c126e31bb\") " pod="openshift-controller-manager/controller-manager-6466b54c89-xhxqc" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.341075 4847 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.341123 4847 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.341137 4847 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.341149 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.341163 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmbsv\" (UniqueName: \"kubernetes.io/projected/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf-kube-api-access-bmbsv\") on node \"crc\" 
DevicePath \"\"" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.442847 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da7c2536-4379-4301-8586-903c126e31bb-client-ca\") pod \"controller-manager-6466b54c89-xhxqc\" (UID: \"da7c2536-4379-4301-8586-903c126e31bb\") " pod="openshift-controller-manager/controller-manager-6466b54c89-xhxqc" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.442907 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62gws\" (UniqueName: \"kubernetes.io/projected/da7c2536-4379-4301-8586-903c126e31bb-kube-api-access-62gws\") pod \"controller-manager-6466b54c89-xhxqc\" (UID: \"da7c2536-4379-4301-8586-903c126e31bb\") " pod="openshift-controller-manager/controller-manager-6466b54c89-xhxqc" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.443028 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da7c2536-4379-4301-8586-903c126e31bb-config\") pod \"controller-manager-6466b54c89-xhxqc\" (UID: \"da7c2536-4379-4301-8586-903c126e31bb\") " pod="openshift-controller-manager/controller-manager-6466b54c89-xhxqc" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.443055 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da7c2536-4379-4301-8586-903c126e31bb-serving-cert\") pod \"controller-manager-6466b54c89-xhxqc\" (UID: \"da7c2536-4379-4301-8586-903c126e31bb\") " pod="openshift-controller-manager/controller-manager-6466b54c89-xhxqc" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.443089 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da7c2536-4379-4301-8586-903c126e31bb-proxy-ca-bundles\") pod \"controller-manager-6466b54c89-xhxqc\" (UID: 
\"da7c2536-4379-4301-8586-903c126e31bb\") " pod="openshift-controller-manager/controller-manager-6466b54c89-xhxqc" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.444327 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da7c2536-4379-4301-8586-903c126e31bb-client-ca\") pod \"controller-manager-6466b54c89-xhxqc\" (UID: \"da7c2536-4379-4301-8586-903c126e31bb\") " pod="openshift-controller-manager/controller-manager-6466b54c89-xhxqc" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.444461 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da7c2536-4379-4301-8586-903c126e31bb-proxy-ca-bundles\") pod \"controller-manager-6466b54c89-xhxqc\" (UID: \"da7c2536-4379-4301-8586-903c126e31bb\") " pod="openshift-controller-manager/controller-manager-6466b54c89-xhxqc" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.445627 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da7c2536-4379-4301-8586-903c126e31bb-config\") pod \"controller-manager-6466b54c89-xhxqc\" (UID: \"da7c2536-4379-4301-8586-903c126e31bb\") " pod="openshift-controller-manager/controller-manager-6466b54c89-xhxqc" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.448216 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da7c2536-4379-4301-8586-903c126e31bb-serving-cert\") pod \"controller-manager-6466b54c89-xhxqc\" (UID: \"da7c2536-4379-4301-8586-903c126e31bb\") " pod="openshift-controller-manager/controller-manager-6466b54c89-xhxqc" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.461892 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62gws\" (UniqueName: 
\"kubernetes.io/projected/da7c2536-4379-4301-8586-903c126e31bb-kube-api-access-62gws\") pod \"controller-manager-6466b54c89-xhxqc\" (UID: \"da7c2536-4379-4301-8586-903c126e31bb\") " pod="openshift-controller-manager/controller-manager-6466b54c89-xhxqc" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.491541 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.491652 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.491719 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.492550 4847 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2d48a1afcf940f6238028cb74fe52ba15e293dc18434794ab21f623d2d49cf75"} pod="openshift-machine-config-operator/machine-config-daemon-xsj47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.492642 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" 
containerID="cri-o://2d48a1afcf940f6238028cb74fe52ba15e293dc18434794ab21f623d2d49cf75" gracePeriod=600 Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.509082 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6466b54c89-xhxqc" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.741396 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6466b54c89-xhxqc"] Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.780795 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vj5w5" event={"ID":"fe22ea9b-4ee3-46dd-afeb-803d41ac163b","Type":"ContainerStarted","Data":"e40a400304989e35321f4aa181b4a56f0d51368ba50e2e88a870865fd4ef951b"} Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.782691 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6466b54c89-xhxqc" event={"ID":"da7c2536-4379-4301-8586-903c126e31bb","Type":"ContainerStarted","Data":"42054be1f9e09b8103e7f447b0919a12ce6d1f9c2a2200a1b601c2e73e51a748"} Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.784205 4847 generic.go:334] "Generic (PLEG): container finished" podID="380cc2f5-0ff8-4c80-8ebe-5afc80e47acf" containerID="e767e780fd3559b53ed49be28b22fea8faba825f973a1924e542f1dcd86aab53" exitCode=0 Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.784265 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4" event={"ID":"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf","Type":"ContainerDied","Data":"e767e780fd3559b53ed49be28b22fea8faba825f973a1924e542f1dcd86aab53"} Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.784317 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4" 
event={"ID":"380cc2f5-0ff8-4c80-8ebe-5afc80e47acf","Type":"ContainerDied","Data":"22d4b725545fb1a79abc851cbaf74a08619bef550f015786d70d98020f1fcd91"} Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.784336 4847 scope.go:117] "RemoveContainer" containerID="e767e780fd3559b53ed49be28b22fea8faba825f973a1924e542f1dcd86aab53" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.784388 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6889d7b855-r6nw4" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.792187 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw6fv" event={"ID":"6b94c451-a6b9-4649-a612-a39065b4e83c","Type":"ContainerStarted","Data":"916c16f515c925491723ea2faf51ec0f63fa990e9f57f0c15c884855b4a116c5"} Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.795431 4847 generic.go:334] "Generic (PLEG): container finished" podID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerID="2d48a1afcf940f6238028cb74fe52ba15e293dc18434794ab21f623d2d49cf75" exitCode=0 Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.795468 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerDied","Data":"2d48a1afcf940f6238028cb74fe52ba15e293dc18434794ab21f623d2d49cf75"} Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.812748 4847 scope.go:117] "RemoveContainer" containerID="e767e780fd3559b53ed49be28b22fea8faba825f973a1924e542f1dcd86aab53" Feb 18 00:31:53 crc kubenswrapper[4847]: E0218 00:31:53.816010 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e767e780fd3559b53ed49be28b22fea8faba825f973a1924e542f1dcd86aab53\": container with ID starting with e767e780fd3559b53ed49be28b22fea8faba825f973a1924e542f1dcd86aab53 not 
found: ID does not exist" containerID="e767e780fd3559b53ed49be28b22fea8faba825f973a1924e542f1dcd86aab53" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.816066 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e767e780fd3559b53ed49be28b22fea8faba825f973a1924e542f1dcd86aab53"} err="failed to get container status \"e767e780fd3559b53ed49be28b22fea8faba825f973a1924e542f1dcd86aab53\": rpc error: code = NotFound desc = could not find container \"e767e780fd3559b53ed49be28b22fea8faba825f973a1924e542f1dcd86aab53\": container with ID starting with e767e780fd3559b53ed49be28b22fea8faba825f973a1924e542f1dcd86aab53 not found: ID does not exist" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.816102 4847 scope.go:117] "RemoveContainer" containerID="21c935ca9c8e2ee24068070e45953a236b1e5a57c92d0e5b4f033ed0aeab7831" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.817972 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vj5w5" podStartSLOduration=2.290755573 podStartE2EDuration="4.817941888s" podCreationTimestamp="2026-02-18 00:31:49 +0000 UTC" firstStartedPulling="2026-02-18 00:31:50.72908611 +0000 UTC m=+384.106437052" lastFinishedPulling="2026-02-18 00:31:53.256272425 +0000 UTC m=+386.633623367" observedRunningTime="2026-02-18 00:31:53.809448439 +0000 UTC m=+387.186799381" watchObservedRunningTime="2026-02-18 00:31:53.817941888 +0000 UTC m=+387.195292830" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.835228 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pw6fv" podStartSLOduration=2.750535654 podStartE2EDuration="5.835207425s" podCreationTimestamp="2026-02-18 00:31:48 +0000 UTC" firstStartedPulling="2026-02-18 00:31:49.712082998 +0000 UTC m=+383.089433930" lastFinishedPulling="2026-02-18 00:31:52.796754759 +0000 UTC m=+386.174105701" 
observedRunningTime="2026-02-18 00:31:53.832329437 +0000 UTC m=+387.209680389" watchObservedRunningTime="2026-02-18 00:31:53.835207425 +0000 UTC m=+387.212558367" Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.857432 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6889d7b855-r6nw4"] Feb 18 00:31:53 crc kubenswrapper[4847]: I0218 00:31:53.860885 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6889d7b855-r6nw4"] Feb 18 00:31:54 crc kubenswrapper[4847]: I0218 00:31:54.805307 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6466b54c89-xhxqc" event={"ID":"da7c2536-4379-4301-8586-903c126e31bb","Type":"ContainerStarted","Data":"23b1667d909d1935a59992b19ef7db1bd20fec6c31cf3f2b6f2c03a128717069"} Feb 18 00:31:54 crc kubenswrapper[4847]: I0218 00:31:54.805871 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6466b54c89-xhxqc" Feb 18 00:31:54 crc kubenswrapper[4847]: I0218 00:31:54.812035 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6466b54c89-xhxqc" Feb 18 00:31:54 crc kubenswrapper[4847]: I0218 00:31:54.859743 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6466b54c89-xhxqc" podStartSLOduration=2.859715375 podStartE2EDuration="2.859715375s" podCreationTimestamp="2026-02-18 00:31:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:31:54.833959729 +0000 UTC m=+388.211310671" watchObservedRunningTime="2026-02-18 00:31:54.859715375 +0000 UTC m=+388.237066317" Feb 18 00:31:55 crc kubenswrapper[4847]: I0218 00:31:55.412721 4847 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="380cc2f5-0ff8-4c80-8ebe-5afc80e47acf" path="/var/lib/kubelet/pods/380cc2f5-0ff8-4c80-8ebe-5afc80e47acf/volumes" Feb 18 00:31:55 crc kubenswrapper[4847]: I0218 00:31:55.814882 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerStarted","Data":"44d600d2b749459f03a3c1cdd67507236e73f363dd766a116429b214e5f46a17"} Feb 18 00:31:56 crc kubenswrapper[4847]: I0218 00:31:56.089491 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mwtpz" Feb 18 00:31:56 crc kubenswrapper[4847]: I0218 00:31:56.089561 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mwtpz" Feb 18 00:31:56 crc kubenswrapper[4847]: I0218 00:31:56.154321 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mwtpz" Feb 18 00:31:56 crc kubenswrapper[4847]: I0218 00:31:56.871458 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mwtpz" Feb 18 00:31:57 crc kubenswrapper[4847]: I0218 00:31:57.057954 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jjhq5" Feb 18 00:31:57 crc kubenswrapper[4847]: I0218 00:31:57.058024 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jjhq5" Feb 18 00:31:58 crc kubenswrapper[4847]: I0218 00:31:58.098570 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jjhq5" podUID="b3a5f225-da8f-4b7c-a346-2926b83b1d0f" containerName="registry-server" probeResult="failure" output=< Feb 18 00:31:58 crc kubenswrapper[4847]: timeout: failed to connect service ":50051" within 1s Feb 18 
00:31:58 crc kubenswrapper[4847]: > Feb 18 00:31:58 crc kubenswrapper[4847]: I0218 00:31:58.507990 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pw6fv" Feb 18 00:31:58 crc kubenswrapper[4847]: I0218 00:31:58.508842 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pw6fv" Feb 18 00:31:58 crc kubenswrapper[4847]: I0218 00:31:58.579416 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pw6fv" Feb 18 00:31:58 crc kubenswrapper[4847]: I0218 00:31:58.883652 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pw6fv" Feb 18 00:31:59 crc kubenswrapper[4847]: I0218 00:31:59.456946 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vj5w5" Feb 18 00:31:59 crc kubenswrapper[4847]: I0218 00:31:59.457512 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vj5w5" Feb 18 00:31:59 crc kubenswrapper[4847]: I0218 00:31:59.504562 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vj5w5" Feb 18 00:31:59 crc kubenswrapper[4847]: I0218 00:31:59.893799 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vj5w5" Feb 18 00:32:07 crc kubenswrapper[4847]: I0218 00:32:07.098909 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jjhq5" Feb 18 00:32:07 crc kubenswrapper[4847]: I0218 00:32:07.145160 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jjhq5" Feb 18 00:32:08 crc kubenswrapper[4847]: I0218 00:32:08.048846 
4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-jcl5t" Feb 18 00:32:08 crc kubenswrapper[4847]: I0218 00:32:08.124064 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9jnmn"] Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.166927 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" podUID="3f4c85a9-c568-472e-b05b-546a70da9391" containerName="registry" containerID="cri-o://b10de552a1ce08d9661a1b8f0f40f4a275aca54ff750d2fdd07809a9d219941b" gracePeriod=30 Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.664628 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.759782 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3f4c85a9-c568-472e-b05b-546a70da9391-installation-pull-secrets\") pod \"3f4c85a9-c568-472e-b05b-546a70da9391\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.759864 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3f4c85a9-c568-472e-b05b-546a70da9391-registry-tls\") pod \"3f4c85a9-c568-472e-b05b-546a70da9391\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.759928 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpqbt\" (UniqueName: \"kubernetes.io/projected/3f4c85a9-c568-472e-b05b-546a70da9391-kube-api-access-kpqbt\") pod \"3f4c85a9-c568-472e-b05b-546a70da9391\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " Feb 18 
00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.759993 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f4c85a9-c568-472e-b05b-546a70da9391-trusted-ca\") pod \"3f4c85a9-c568-472e-b05b-546a70da9391\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.760052 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3f4c85a9-c568-472e-b05b-546a70da9391-ca-trust-extracted\") pod \"3f4c85a9-c568-472e-b05b-546a70da9391\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.760209 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"3f4c85a9-c568-472e-b05b-546a70da9391\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.760299 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3f4c85a9-c568-472e-b05b-546a70da9391-bound-sa-token\") pod \"3f4c85a9-c568-472e-b05b-546a70da9391\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.760326 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3f4c85a9-c568-472e-b05b-546a70da9391-registry-certificates\") pod \"3f4c85a9-c568-472e-b05b-546a70da9391\" (UID: \"3f4c85a9-c568-472e-b05b-546a70da9391\") " Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.767708 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3f4c85a9-c568-472e-b05b-546a70da9391-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "3f4c85a9-c568-472e-b05b-546a70da9391" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.770032 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f4c85a9-c568-472e-b05b-546a70da9391-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "3f4c85a9-c568-472e-b05b-546a70da9391" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.776145 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f4c85a9-c568-472e-b05b-546a70da9391-kube-api-access-kpqbt" (OuterVolumeSpecName: "kube-api-access-kpqbt") pod "3f4c85a9-c568-472e-b05b-546a70da9391" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391"). InnerVolumeSpecName "kube-api-access-kpqbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.776733 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f4c85a9-c568-472e-b05b-546a70da9391-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "3f4c85a9-c568-472e-b05b-546a70da9391" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.778362 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f4c85a9-c568-472e-b05b-546a70da9391-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "3f4c85a9-c568-472e-b05b-546a70da9391" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.778490 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f4c85a9-c568-472e-b05b-546a70da9391-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "3f4c85a9-c568-472e-b05b-546a70da9391" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.793034 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "3f4c85a9-c568-472e-b05b-546a70da9391" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.805769 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f4c85a9-c568-472e-b05b-546a70da9391-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "3f4c85a9-c568-472e-b05b-546a70da9391" (UID: "3f4c85a9-c568-472e-b05b-546a70da9391"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.862682 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpqbt\" (UniqueName: \"kubernetes.io/projected/3f4c85a9-c568-472e-b05b-546a70da9391-kube-api-access-kpqbt\") on node \"crc\" DevicePath \"\"" Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.862743 4847 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f4c85a9-c568-472e-b05b-546a70da9391-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.862767 4847 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3f4c85a9-c568-472e-b05b-546a70da9391-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.862828 4847 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3f4c85a9-c568-472e-b05b-546a70da9391-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.862850 4847 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3f4c85a9-c568-472e-b05b-546a70da9391-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.862870 4847 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3f4c85a9-c568-472e-b05b-546a70da9391-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 00:32:33 crc kubenswrapper[4847]: I0218 00:32:33.862894 4847 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3f4c85a9-c568-472e-b05b-546a70da9391-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:32:34 crc 
kubenswrapper[4847]: I0218 00:32:34.104825 4847 generic.go:334] "Generic (PLEG): container finished" podID="3f4c85a9-c568-472e-b05b-546a70da9391" containerID="b10de552a1ce08d9661a1b8f0f40f4a275aca54ff750d2fdd07809a9d219941b" exitCode=0 Feb 18 00:32:34 crc kubenswrapper[4847]: I0218 00:32:34.104902 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" event={"ID":"3f4c85a9-c568-472e-b05b-546a70da9391","Type":"ContainerDied","Data":"b10de552a1ce08d9661a1b8f0f40f4a275aca54ff750d2fdd07809a9d219941b"} Feb 18 00:32:34 crc kubenswrapper[4847]: I0218 00:32:34.104954 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" event={"ID":"3f4c85a9-c568-472e-b05b-546a70da9391","Type":"ContainerDied","Data":"8d982241b49c3aa1d2ab35918d57a637eba0010b77172c6c9d81b9577a727aa3"} Feb 18 00:32:34 crc kubenswrapper[4847]: I0218 00:32:34.104975 4847 scope.go:117] "RemoveContainer" containerID="b10de552a1ce08d9661a1b8f0f40f4a275aca54ff750d2fdd07809a9d219941b" Feb 18 00:32:34 crc kubenswrapper[4847]: I0218 00:32:34.104986 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9jnmn" Feb 18 00:32:34 crc kubenswrapper[4847]: I0218 00:32:34.134431 4847 scope.go:117] "RemoveContainer" containerID="b10de552a1ce08d9661a1b8f0f40f4a275aca54ff750d2fdd07809a9d219941b" Feb 18 00:32:34 crc kubenswrapper[4847]: E0218 00:32:34.135441 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b10de552a1ce08d9661a1b8f0f40f4a275aca54ff750d2fdd07809a9d219941b\": container with ID starting with b10de552a1ce08d9661a1b8f0f40f4a275aca54ff750d2fdd07809a9d219941b not found: ID does not exist" containerID="b10de552a1ce08d9661a1b8f0f40f4a275aca54ff750d2fdd07809a9d219941b" Feb 18 00:32:34 crc kubenswrapper[4847]: I0218 00:32:34.135736 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b10de552a1ce08d9661a1b8f0f40f4a275aca54ff750d2fdd07809a9d219941b"} err="failed to get container status \"b10de552a1ce08d9661a1b8f0f40f4a275aca54ff750d2fdd07809a9d219941b\": rpc error: code = NotFound desc = could not find container \"b10de552a1ce08d9661a1b8f0f40f4a275aca54ff750d2fdd07809a9d219941b\": container with ID starting with b10de552a1ce08d9661a1b8f0f40f4a275aca54ff750d2fdd07809a9d219941b not found: ID does not exist" Feb 18 00:32:34 crc kubenswrapper[4847]: I0218 00:32:34.153782 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9jnmn"] Feb 18 00:32:34 crc kubenswrapper[4847]: I0218 00:32:34.158847 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9jnmn"] Feb 18 00:32:35 crc kubenswrapper[4847]: I0218 00:32:35.418648 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f4c85a9-c568-472e-b05b-546a70da9391" path="/var/lib/kubelet/pods/3f4c85a9-c568-472e-b05b-546a70da9391/volumes" Feb 18 00:34:23 crc kubenswrapper[4847]: I0218 
00:34:23.491365 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:34:23 crc kubenswrapper[4847]: I0218 00:34:23.492320 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:34:27 crc kubenswrapper[4847]: I0218 00:34:27.693626 4847 scope.go:117] "RemoveContainer" containerID="83019b7cc4a0cc21da344585024440aa2ce7b1dcfdde7881ef39358e4cda322a" Feb 18 00:34:27 crc kubenswrapper[4847]: I0218 00:34:27.724716 4847 scope.go:117] "RemoveContainer" containerID="96a495bf2c9adb1962cd35cf5fe155423f82659af87ceee478f2ff2291b7bd6f" Feb 18 00:34:53 crc kubenswrapper[4847]: I0218 00:34:53.492413 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:34:53 crc kubenswrapper[4847]: I0218 00:34:53.493868 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:35:23 crc kubenswrapper[4847]: I0218 00:35:23.492385 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:35:23 crc kubenswrapper[4847]: I0218 00:35:23.493400 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:35:23 crc kubenswrapper[4847]: I0218 00:35:23.493481 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 00:35:23 crc kubenswrapper[4847]: I0218 00:35:23.494277 4847 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"44d600d2b749459f03a3c1cdd67507236e73f363dd766a116429b214e5f46a17"} pod="openshift-machine-config-operator/machine-config-daemon-xsj47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:35:23 crc kubenswrapper[4847]: I0218 00:35:23.494383 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" containerID="cri-o://44d600d2b749459f03a3c1cdd67507236e73f363dd766a116429b214e5f46a17" gracePeriod=600 Feb 18 00:35:23 crc kubenswrapper[4847]: I0218 00:35:23.690617 4847 generic.go:334] "Generic (PLEG): container finished" podID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerID="44d600d2b749459f03a3c1cdd67507236e73f363dd766a116429b214e5f46a17" exitCode=0 Feb 18 00:35:23 crc kubenswrapper[4847]: I0218 00:35:23.690708 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerDied","Data":"44d600d2b749459f03a3c1cdd67507236e73f363dd766a116429b214e5f46a17"} Feb 18 00:35:23 crc kubenswrapper[4847]: I0218 00:35:23.690983 4847 scope.go:117] "RemoveContainer" containerID="2d48a1afcf940f6238028cb74fe52ba15e293dc18434794ab21f623d2d49cf75" Feb 18 00:35:24 crc kubenswrapper[4847]: I0218 00:35:24.701944 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerStarted","Data":"7e14399c572be0bcab6145068e4196c5aff977a8de62be4c5222c60a21f3d43d"} Feb 18 00:35:27 crc kubenswrapper[4847]: I0218 00:35:27.792830 4847 scope.go:117] "RemoveContainer" containerID="158b1aa6fdc6be9abadd1dbbf255249e5b1875d921952319b76e99629704d10a" Feb 18 00:35:27 crc kubenswrapper[4847]: I0218 00:35:27.830697 4847 scope.go:117] "RemoveContainer" containerID="b0d4906f150fc21daea04395ce097f94f660e5fff0f5e85616e0f20a9f2a362f" Feb 18 00:36:43 crc kubenswrapper[4847]: I0218 00:36:43.627264 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954"] Feb 18 00:36:43 crc kubenswrapper[4847]: E0218 00:36:43.629099 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f4c85a9-c568-472e-b05b-546a70da9391" containerName="registry" Feb 18 00:36:43 crc kubenswrapper[4847]: I0218 00:36:43.629205 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4c85a9-c568-472e-b05b-546a70da9391" containerName="registry" Feb 18 00:36:43 crc kubenswrapper[4847]: I0218 00:36:43.629453 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f4c85a9-c568-472e-b05b-546a70da9391" containerName="registry" Feb 18 00:36:43 crc kubenswrapper[4847]: I0218 00:36:43.630463 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954" Feb 18 00:36:43 crc kubenswrapper[4847]: I0218 00:36:43.633892 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 00:36:43 crc kubenswrapper[4847]: I0218 00:36:43.645532 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954"] Feb 18 00:36:43 crc kubenswrapper[4847]: I0218 00:36:43.670805 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52b44016-fa7b-4c2a-8071-d4406928c47b-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954\" (UID: \"52b44016-fa7b-4c2a-8071-d4406928c47b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954" Feb 18 00:36:43 crc kubenswrapper[4847]: I0218 00:36:43.671100 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52b44016-fa7b-4c2a-8071-d4406928c47b-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954\" (UID: \"52b44016-fa7b-4c2a-8071-d4406928c47b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954" Feb 18 00:36:43 crc kubenswrapper[4847]: I0218 00:36:43.671156 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghtnp\" (UniqueName: \"kubernetes.io/projected/52b44016-fa7b-4c2a-8071-d4406928c47b-kube-api-access-ghtnp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954\" (UID: \"52b44016-fa7b-4c2a-8071-d4406928c47b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954" Feb 18 00:36:43 crc kubenswrapper[4847]: 
I0218 00:36:43.772065 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52b44016-fa7b-4c2a-8071-d4406928c47b-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954\" (UID: \"52b44016-fa7b-4c2a-8071-d4406928c47b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954" Feb 18 00:36:43 crc kubenswrapper[4847]: I0218 00:36:43.772183 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52b44016-fa7b-4c2a-8071-d4406928c47b-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954\" (UID: \"52b44016-fa7b-4c2a-8071-d4406928c47b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954" Feb 18 00:36:43 crc kubenswrapper[4847]: I0218 00:36:43.772212 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghtnp\" (UniqueName: \"kubernetes.io/projected/52b44016-fa7b-4c2a-8071-d4406928c47b-kube-api-access-ghtnp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954\" (UID: \"52b44016-fa7b-4c2a-8071-d4406928c47b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954" Feb 18 00:36:43 crc kubenswrapper[4847]: I0218 00:36:43.772766 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52b44016-fa7b-4c2a-8071-d4406928c47b-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954\" (UID: \"52b44016-fa7b-4c2a-8071-d4406928c47b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954" Feb 18 00:36:43 crc kubenswrapper[4847]: I0218 00:36:43.772921 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/52b44016-fa7b-4c2a-8071-d4406928c47b-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954\" (UID: \"52b44016-fa7b-4c2a-8071-d4406928c47b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954" Feb 18 00:36:43 crc kubenswrapper[4847]: I0218 00:36:43.802575 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghtnp\" (UniqueName: \"kubernetes.io/projected/52b44016-fa7b-4c2a-8071-d4406928c47b-kube-api-access-ghtnp\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954\" (UID: \"52b44016-fa7b-4c2a-8071-d4406928c47b\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954" Feb 18 00:36:43 crc kubenswrapper[4847]: I0218 00:36:43.965774 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954" Feb 18 00:36:44 crc kubenswrapper[4847]: I0218 00:36:44.236858 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954"] Feb 18 00:36:44 crc kubenswrapper[4847]: I0218 00:36:44.316638 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954" event={"ID":"52b44016-fa7b-4c2a-8071-d4406928c47b","Type":"ContainerStarted","Data":"365b930aa4e483ff2665939e8ddd8e06b15b3fc2477c5d0a07b5959c29bf0562"} Feb 18 00:36:45 crc kubenswrapper[4847]: I0218 00:36:45.325458 4847 generic.go:334] "Generic (PLEG): container finished" podID="52b44016-fa7b-4c2a-8071-d4406928c47b" containerID="fca9387f49eedf2c7c3331be09f3e8c5878882ccdacfc98a3c47232b04e06150" exitCode=0 Feb 18 00:36:45 crc kubenswrapper[4847]: I0218 00:36:45.325518 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954" event={"ID":"52b44016-fa7b-4c2a-8071-d4406928c47b","Type":"ContainerDied","Data":"fca9387f49eedf2c7c3331be09f3e8c5878882ccdacfc98a3c47232b04e06150"} Feb 18 00:36:45 crc kubenswrapper[4847]: I0218 00:36:45.327576 4847 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 00:36:47 crc kubenswrapper[4847]: I0218 00:36:47.353547 4847 generic.go:334] "Generic (PLEG): container finished" podID="52b44016-fa7b-4c2a-8071-d4406928c47b" containerID="0c889131be9ea89f824450ce3995791acaa56bced41f54b298c6bd0deef3f97d" exitCode=0 Feb 18 00:36:47 crc kubenswrapper[4847]: I0218 00:36:47.353673 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954" event={"ID":"52b44016-fa7b-4c2a-8071-d4406928c47b","Type":"ContainerDied","Data":"0c889131be9ea89f824450ce3995791acaa56bced41f54b298c6bd0deef3f97d"} Feb 18 00:36:48 crc kubenswrapper[4847]: I0218 00:36:48.363492 4847 generic.go:334] "Generic (PLEG): container finished" podID="52b44016-fa7b-4c2a-8071-d4406928c47b" containerID="6176995e538706fbd4ab3e61024a4e4912292ebda073f88ea9319205c1295f9d" exitCode=0 Feb 18 00:36:48 crc kubenswrapper[4847]: I0218 00:36:48.363572 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954" event={"ID":"52b44016-fa7b-4c2a-8071-d4406928c47b","Type":"ContainerDied","Data":"6176995e538706fbd4ab3e61024a4e4912292ebda073f88ea9319205c1295f9d"} Feb 18 00:36:49 crc kubenswrapper[4847]: I0218 00:36:49.753365 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954" Feb 18 00:36:49 crc kubenswrapper[4847]: I0218 00:36:49.957509 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52b44016-fa7b-4c2a-8071-d4406928c47b-bundle\") pod \"52b44016-fa7b-4c2a-8071-d4406928c47b\" (UID: \"52b44016-fa7b-4c2a-8071-d4406928c47b\") " Feb 18 00:36:49 crc kubenswrapper[4847]: I0218 00:36:49.957674 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghtnp\" (UniqueName: \"kubernetes.io/projected/52b44016-fa7b-4c2a-8071-d4406928c47b-kube-api-access-ghtnp\") pod \"52b44016-fa7b-4c2a-8071-d4406928c47b\" (UID: \"52b44016-fa7b-4c2a-8071-d4406928c47b\") " Feb 18 00:36:49 crc kubenswrapper[4847]: I0218 00:36:49.957757 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52b44016-fa7b-4c2a-8071-d4406928c47b-util\") pod \"52b44016-fa7b-4c2a-8071-d4406928c47b\" (UID: \"52b44016-fa7b-4c2a-8071-d4406928c47b\") " Feb 18 00:36:49 crc kubenswrapper[4847]: I0218 00:36:49.962526 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52b44016-fa7b-4c2a-8071-d4406928c47b-bundle" (OuterVolumeSpecName: "bundle") pod "52b44016-fa7b-4c2a-8071-d4406928c47b" (UID: "52b44016-fa7b-4c2a-8071-d4406928c47b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:36:49 crc kubenswrapper[4847]: I0218 00:36:49.966334 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52b44016-fa7b-4c2a-8071-d4406928c47b-kube-api-access-ghtnp" (OuterVolumeSpecName: "kube-api-access-ghtnp") pod "52b44016-fa7b-4c2a-8071-d4406928c47b" (UID: "52b44016-fa7b-4c2a-8071-d4406928c47b"). InnerVolumeSpecName "kube-api-access-ghtnp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:36:49 crc kubenswrapper[4847]: I0218 00:36:49.979550 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52b44016-fa7b-4c2a-8071-d4406928c47b-util" (OuterVolumeSpecName: "util") pod "52b44016-fa7b-4c2a-8071-d4406928c47b" (UID: "52b44016-fa7b-4c2a-8071-d4406928c47b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:36:50 crc kubenswrapper[4847]: I0218 00:36:50.060197 4847 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52b44016-fa7b-4c2a-8071-d4406928c47b-util\") on node \"crc\" DevicePath \"\"" Feb 18 00:36:50 crc kubenswrapper[4847]: I0218 00:36:50.060267 4847 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52b44016-fa7b-4c2a-8071-d4406928c47b-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:36:50 crc kubenswrapper[4847]: I0218 00:36:50.060289 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghtnp\" (UniqueName: \"kubernetes.io/projected/52b44016-fa7b-4c2a-8071-d4406928c47b-kube-api-access-ghtnp\") on node \"crc\" DevicePath \"\"" Feb 18 00:36:50 crc kubenswrapper[4847]: I0218 00:36:50.383378 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954" event={"ID":"52b44016-fa7b-4c2a-8071-d4406928c47b","Type":"ContainerDied","Data":"365b930aa4e483ff2665939e8ddd8e06b15b3fc2477c5d0a07b5959c29bf0562"} Feb 18 00:36:50 crc kubenswrapper[4847]: I0218 00:36:50.383754 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="365b930aa4e483ff2665939e8ddd8e06b15b3fc2477c5d0a07b5959c29bf0562" Feb 18 00:36:50 crc kubenswrapper[4847]: I0218 00:36:50.383475 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.579778 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-9jgmd"] Feb 18 00:37:01 crc kubenswrapper[4847]: E0218 00:37:01.580504 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b44016-fa7b-4c2a-8071-d4406928c47b" containerName="extract" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.580519 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b44016-fa7b-4c2a-8071-d4406928c47b" containerName="extract" Feb 18 00:37:01 crc kubenswrapper[4847]: E0218 00:37:01.580538 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b44016-fa7b-4c2a-8071-d4406928c47b" containerName="pull" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.580547 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b44016-fa7b-4c2a-8071-d4406928c47b" containerName="pull" Feb 18 00:37:01 crc kubenswrapper[4847]: E0218 00:37:01.580566 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52b44016-fa7b-4c2a-8071-d4406928c47b" containerName="util" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.580577 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="52b44016-fa7b-4c2a-8071-d4406928c47b" containerName="util" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.580723 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="52b44016-fa7b-4c2a-8071-d4406928c47b" containerName="extract" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.581135 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9jgmd" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.583012 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.583277 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.586246 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-zmxj4" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.599651 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-9jgmd"] Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.615521 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-pwm9s"] Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.616497 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-pwm9s" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.618488 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.618696 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-m4jrw" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.624178 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9a167167-ef99-4088-bf22-f10acba5f1c1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6fdcccb7c9-pwm9s\" (UID: \"9a167167-ef99-4088-bf22-f10acba5f1c1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-pwm9s" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.624239 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9a167167-ef99-4088-bf22-f10acba5f1c1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6fdcccb7c9-pwm9s\" (UID: \"9a167167-ef99-4088-bf22-f10acba5f1c1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-pwm9s" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.624306 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9gds\" (UniqueName: \"kubernetes.io/projected/6d5be12f-bed3-4a23-aa85-f0a08a5fc046-kube-api-access-s9gds\") pod \"obo-prometheus-operator-68bc856cb9-9jgmd\" (UID: \"6d5be12f-bed3-4a23-aa85-f0a08a5fc046\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9jgmd" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.669947 
4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-pwm9s"] Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.686941 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-lvv8c"] Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.688642 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-lvv8c" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.702658 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-lvv8c"] Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.725224 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9gds\" (UniqueName: \"kubernetes.io/projected/6d5be12f-bed3-4a23-aa85-f0a08a5fc046-kube-api-access-s9gds\") pod \"obo-prometheus-operator-68bc856cb9-9jgmd\" (UID: \"6d5be12f-bed3-4a23-aa85-f0a08a5fc046\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9jgmd" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.725282 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9a167167-ef99-4088-bf22-f10acba5f1c1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6fdcccb7c9-pwm9s\" (UID: \"9a167167-ef99-4088-bf22-f10acba5f1c1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-pwm9s" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.725315 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9a167167-ef99-4088-bf22-f10acba5f1c1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6fdcccb7c9-pwm9s\" (UID: 
\"9a167167-ef99-4088-bf22-f10acba5f1c1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-pwm9s" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.731045 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9a167167-ef99-4088-bf22-f10acba5f1c1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6fdcccb7c9-pwm9s\" (UID: \"9a167167-ef99-4088-bf22-f10acba5f1c1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-pwm9s" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.731527 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9a167167-ef99-4088-bf22-f10acba5f1c1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6fdcccb7c9-pwm9s\" (UID: \"9a167167-ef99-4088-bf22-f10acba5f1c1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-pwm9s" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.742270 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9gds\" (UniqueName: \"kubernetes.io/projected/6d5be12f-bed3-4a23-aa85-f0a08a5fc046-kube-api-access-s9gds\") pod \"obo-prometheus-operator-68bc856cb9-9jgmd\" (UID: \"6d5be12f-bed3-4a23-aa85-f0a08a5fc046\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9jgmd" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.813662 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-kxn6x"] Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.814328 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kxn6x" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.815905 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-b44x5" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.816113 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.830064 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c31e5d6e-6fa4-4dfb-bbef-70effd832c70-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6fdcccb7c9-lvv8c\" (UID: \"c31e5d6e-6fa4-4dfb-bbef-70effd832c70\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-lvv8c" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.830186 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c31e5d6e-6fa4-4dfb-bbef-70effd832c70-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6fdcccb7c9-lvv8c\" (UID: \"c31e5d6e-6fa4-4dfb-bbef-70effd832c70\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-lvv8c" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.830471 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-kxn6x"] Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.902070 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9jgmd" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.934435 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8s6p\" (UniqueName: \"kubernetes.io/projected/f781a655-8f6a-4fe4-a3e8-306cd263c8f8-kube-api-access-c8s6p\") pod \"observability-operator-59bdc8b94-kxn6x\" (UID: \"f781a655-8f6a-4fe4-a3e8-306cd263c8f8\") " pod="openshift-operators/observability-operator-59bdc8b94-kxn6x" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.934847 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c31e5d6e-6fa4-4dfb-bbef-70effd832c70-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6fdcccb7c9-lvv8c\" (UID: \"c31e5d6e-6fa4-4dfb-bbef-70effd832c70\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-lvv8c" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.934904 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f781a655-8f6a-4fe4-a3e8-306cd263c8f8-observability-operator-tls\") pod \"observability-operator-59bdc8b94-kxn6x\" (UID: \"f781a655-8f6a-4fe4-a3e8-306cd263c8f8\") " pod="openshift-operators/observability-operator-59bdc8b94-kxn6x" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.934935 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c31e5d6e-6fa4-4dfb-bbef-70effd832c70-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6fdcccb7c9-lvv8c\" (UID: \"c31e5d6e-6fa4-4dfb-bbef-70effd832c70\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-lvv8c" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.941387 4847 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c31e5d6e-6fa4-4dfb-bbef-70effd832c70-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6fdcccb7c9-lvv8c\" (UID: \"c31e5d6e-6fa4-4dfb-bbef-70effd832c70\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-lvv8c" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.947166 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c31e5d6e-6fa4-4dfb-bbef-70effd832c70-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6fdcccb7c9-lvv8c\" (UID: \"c31e5d6e-6fa4-4dfb-bbef-70effd832c70\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-lvv8c" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.953912 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-pwm9s" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.978879 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-jx28n"] Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.992389 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-jx28n" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.995500 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-lgxsf" Feb 18 00:37:01 crc kubenswrapper[4847]: I0218 00:37:01.996627 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-jx28n"] Feb 18 00:37:02 crc kubenswrapper[4847]: I0218 00:37:02.017105 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-lvv8c" Feb 18 00:37:02 crc kubenswrapper[4847]: I0218 00:37:02.036225 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f781a655-8f6a-4fe4-a3e8-306cd263c8f8-observability-operator-tls\") pod \"observability-operator-59bdc8b94-kxn6x\" (UID: \"f781a655-8f6a-4fe4-a3e8-306cd263c8f8\") " pod="openshift-operators/observability-operator-59bdc8b94-kxn6x" Feb 18 00:37:02 crc kubenswrapper[4847]: I0218 00:37:02.036307 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8s6p\" (UniqueName: \"kubernetes.io/projected/f781a655-8f6a-4fe4-a3e8-306cd263c8f8-kube-api-access-c8s6p\") pod \"observability-operator-59bdc8b94-kxn6x\" (UID: \"f781a655-8f6a-4fe4-a3e8-306cd263c8f8\") " pod="openshift-operators/observability-operator-59bdc8b94-kxn6x" Feb 18 00:37:02 crc kubenswrapper[4847]: I0218 00:37:02.045819 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f781a655-8f6a-4fe4-a3e8-306cd263c8f8-observability-operator-tls\") pod \"observability-operator-59bdc8b94-kxn6x\" (UID: \"f781a655-8f6a-4fe4-a3e8-306cd263c8f8\") " pod="openshift-operators/observability-operator-59bdc8b94-kxn6x" Feb 18 00:37:02 crc kubenswrapper[4847]: I0218 00:37:02.068543 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8s6p\" (UniqueName: \"kubernetes.io/projected/f781a655-8f6a-4fe4-a3e8-306cd263c8f8-kube-api-access-c8s6p\") pod \"observability-operator-59bdc8b94-kxn6x\" (UID: \"f781a655-8f6a-4fe4-a3e8-306cd263c8f8\") " pod="openshift-operators/observability-operator-59bdc8b94-kxn6x" Feb 18 00:37:02 crc kubenswrapper[4847]: I0218 00:37:02.137823 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-fqw9f\" (UniqueName: \"kubernetes.io/projected/90db687a-cb80-4d17-848c-f4a28348db36-kube-api-access-fqw9f\") pod \"perses-operator-5bf474d74f-jx28n\" (UID: \"90db687a-cb80-4d17-848c-f4a28348db36\") " pod="openshift-operators/perses-operator-5bf474d74f-jx28n" Feb 18 00:37:02 crc kubenswrapper[4847]: I0218 00:37:02.137888 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/90db687a-cb80-4d17-848c-f4a28348db36-openshift-service-ca\") pod \"perses-operator-5bf474d74f-jx28n\" (UID: \"90db687a-cb80-4d17-848c-f4a28348db36\") " pod="openshift-operators/perses-operator-5bf474d74f-jx28n" Feb 18 00:37:02 crc kubenswrapper[4847]: I0218 00:37:02.139029 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kxn6x" Feb 18 00:37:02 crc kubenswrapper[4847]: I0218 00:37:02.239489 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqw9f\" (UniqueName: \"kubernetes.io/projected/90db687a-cb80-4d17-848c-f4a28348db36-kube-api-access-fqw9f\") pod \"perses-operator-5bf474d74f-jx28n\" (UID: \"90db687a-cb80-4d17-848c-f4a28348db36\") " pod="openshift-operators/perses-operator-5bf474d74f-jx28n" Feb 18 00:37:02 crc kubenswrapper[4847]: I0218 00:37:02.239561 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/90db687a-cb80-4d17-848c-f4a28348db36-openshift-service-ca\") pod \"perses-operator-5bf474d74f-jx28n\" (UID: \"90db687a-cb80-4d17-848c-f4a28348db36\") " pod="openshift-operators/perses-operator-5bf474d74f-jx28n" Feb 18 00:37:02 crc kubenswrapper[4847]: I0218 00:37:02.240460 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/90db687a-cb80-4d17-848c-f4a28348db36-openshift-service-ca\") pod \"perses-operator-5bf474d74f-jx28n\" (UID: \"90db687a-cb80-4d17-848c-f4a28348db36\") " pod="openshift-operators/perses-operator-5bf474d74f-jx28n" Feb 18 00:37:02 crc kubenswrapper[4847]: I0218 00:37:02.252655 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-pwm9s"] Feb 18 00:37:02 crc kubenswrapper[4847]: I0218 00:37:02.265699 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqw9f\" (UniqueName: \"kubernetes.io/projected/90db687a-cb80-4d17-848c-f4a28348db36-kube-api-access-fqw9f\") pod \"perses-operator-5bf474d74f-jx28n\" (UID: \"90db687a-cb80-4d17-848c-f4a28348db36\") " pod="openshift-operators/perses-operator-5bf474d74f-jx28n" Feb 18 00:37:02 crc kubenswrapper[4847]: W0218 00:37:02.297633 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a167167_ef99_4088_bf22_f10acba5f1c1.slice/crio-97014ae88fdc3b18e7805b04c53e630909cc89353fe15a83b6cc1dc7c81d61cf WatchSource:0}: Error finding container 97014ae88fdc3b18e7805b04c53e630909cc89353fe15a83b6cc1dc7c81d61cf: Status 404 returned error can't find the container with id 97014ae88fdc3b18e7805b04c53e630909cc89353fe15a83b6cc1dc7c81d61cf Feb 18 00:37:02 crc kubenswrapper[4847]: I0218 00:37:02.317829 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-jx28n" Feb 18 00:37:02 crc kubenswrapper[4847]: I0218 00:37:02.409227 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-lvv8c"] Feb 18 00:37:02 crc kubenswrapper[4847]: I0218 00:37:02.446239 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-kxn6x"] Feb 18 00:37:02 crc kubenswrapper[4847]: I0218 00:37:02.454630 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-pwm9s" event={"ID":"9a167167-ef99-4088-bf22-f10acba5f1c1","Type":"ContainerStarted","Data":"97014ae88fdc3b18e7805b04c53e630909cc89353fe15a83b6cc1dc7c81d61cf"} Feb 18 00:37:02 crc kubenswrapper[4847]: I0218 00:37:02.455716 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-lvv8c" event={"ID":"c31e5d6e-6fa4-4dfb-bbef-70effd832c70","Type":"ContainerStarted","Data":"37ac93be9b5371e99cc7c3a574ad84a5269749379fe2a59591ad1e7da33b7c99"} Feb 18 00:37:02 crc kubenswrapper[4847]: I0218 00:37:02.492207 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-9jgmd"] Feb 18 00:37:02 crc kubenswrapper[4847]: W0218 00:37:02.500280 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d5be12f_bed3_4a23_aa85_f0a08a5fc046.slice/crio-722d1043aeb26e488f1fd4773d7ae2e0a9de4f2335a7d389a3745fecfb35a2b9 WatchSource:0}: Error finding container 722d1043aeb26e488f1fd4773d7ae2e0a9de4f2335a7d389a3745fecfb35a2b9: Status 404 returned error can't find the container with id 722d1043aeb26e488f1fd4773d7ae2e0a9de4f2335a7d389a3745fecfb35a2b9 Feb 18 00:37:02 crc kubenswrapper[4847]: I0218 00:37:02.543948 4847 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-jx28n"] Feb 18 00:37:02 crc kubenswrapper[4847]: W0218 00:37:02.553481 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90db687a_cb80_4d17_848c_f4a28348db36.slice/crio-1ab04f8590afb5b8a3f15dbd08193a67bd795c4a0d563301b634d759a7f36cf9 WatchSource:0}: Error finding container 1ab04f8590afb5b8a3f15dbd08193a67bd795c4a0d563301b634d759a7f36cf9: Status 404 returned error can't find the container with id 1ab04f8590afb5b8a3f15dbd08193a67bd795c4a0d563301b634d759a7f36cf9 Feb 18 00:37:03 crc kubenswrapper[4847]: I0218 00:37:03.465947 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9jgmd" event={"ID":"6d5be12f-bed3-4a23-aa85-f0a08a5fc046","Type":"ContainerStarted","Data":"722d1043aeb26e488f1fd4773d7ae2e0a9de4f2335a7d389a3745fecfb35a2b9"} Feb 18 00:37:03 crc kubenswrapper[4847]: I0218 00:37:03.468654 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-jx28n" event={"ID":"90db687a-cb80-4d17-848c-f4a28348db36","Type":"ContainerStarted","Data":"1ab04f8590afb5b8a3f15dbd08193a67bd795c4a0d563301b634d759a7f36cf9"} Feb 18 00:37:03 crc kubenswrapper[4847]: I0218 00:37:03.470876 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-kxn6x" event={"ID":"f781a655-8f6a-4fe4-a3e8-306cd263c8f8","Type":"ContainerStarted","Data":"3bf6ca5f3e1eb13a94d8dbd515f1a13616fbb1a1d3543d0aaa57dc8a30a36409"} Feb 18 00:37:15 crc kubenswrapper[4847]: I0218 00:37:15.551476 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-pwm9s" event={"ID":"9a167167-ef99-4088-bf22-f10acba5f1c1","Type":"ContainerStarted","Data":"a408487d255f4a52caaf436535df135442f3b91d6c5d18e69380bee409aebc12"} Feb 18 
00:37:15 crc kubenswrapper[4847]: I0218 00:37:15.552914 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9jgmd" event={"ID":"6d5be12f-bed3-4a23-aa85-f0a08a5fc046","Type":"ContainerStarted","Data":"5622ba10407232ce5fe44e6184e925479469f747f1516cef4fb4fd3a93283beb"} Feb 18 00:37:15 crc kubenswrapper[4847]: I0218 00:37:15.554244 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-jx28n" event={"ID":"90db687a-cb80-4d17-848c-f4a28348db36","Type":"ContainerStarted","Data":"d4958d8ddaf794b3bc5a6244a591f39ee086461d150ccc5ac299735ac00c3392"} Feb 18 00:37:15 crc kubenswrapper[4847]: I0218 00:37:15.554707 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-jx28n" Feb 18 00:37:15 crc kubenswrapper[4847]: I0218 00:37:15.555989 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-kxn6x" event={"ID":"f781a655-8f6a-4fe4-a3e8-306cd263c8f8","Type":"ContainerStarted","Data":"f60defbce75aca7130f197ae29482c2f4636a9bf3b9440a5e4b4563228d6939e"} Feb 18 00:37:15 crc kubenswrapper[4847]: I0218 00:37:15.556775 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-kxn6x" Feb 18 00:37:15 crc kubenswrapper[4847]: I0218 00:37:15.558590 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-lvv8c" event={"ID":"c31e5d6e-6fa4-4dfb-bbef-70effd832c70","Type":"ContainerStarted","Data":"a56a9e01093a778a82728fc822512ccf2373b10b2e76c96284ede44ed6628279"} Feb 18 00:37:15 crc kubenswrapper[4847]: I0218 00:37:15.559321 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-kxn6x" Feb 18 00:37:15 crc kubenswrapper[4847]: I0218 
00:37:15.574165 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-pwm9s" podStartSLOduration=2.287820338 podStartE2EDuration="14.574143554s" podCreationTimestamp="2026-02-18 00:37:01 +0000 UTC" firstStartedPulling="2026-02-18 00:37:02.299316371 +0000 UTC m=+695.676667313" lastFinishedPulling="2026-02-18 00:37:14.585639587 +0000 UTC m=+707.962990529" observedRunningTime="2026-02-18 00:37:15.573505438 +0000 UTC m=+708.950856380" watchObservedRunningTime="2026-02-18 00:37:15.574143554 +0000 UTC m=+708.951494526" Feb 18 00:37:15 crc kubenswrapper[4847]: I0218 00:37:15.622574 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fdcccb7c9-lvv8c" podStartSLOduration=2.45334949 podStartE2EDuration="14.622555482s" podCreationTimestamp="2026-02-18 00:37:01 +0000 UTC" firstStartedPulling="2026-02-18 00:37:02.440305335 +0000 UTC m=+695.817656277" lastFinishedPulling="2026-02-18 00:37:14.609511327 +0000 UTC m=+707.986862269" observedRunningTime="2026-02-18 00:37:15.617931874 +0000 UTC m=+708.995282836" watchObservedRunningTime="2026-02-18 00:37:15.622555482 +0000 UTC m=+708.999906444" Feb 18 00:37:15 crc kubenswrapper[4847]: I0218 00:37:15.644779 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-kxn6x" podStartSLOduration=2.419422082 podStartE2EDuration="14.644761269s" podCreationTimestamp="2026-02-18 00:37:01 +0000 UTC" firstStartedPulling="2026-02-18 00:37:02.458963462 +0000 UTC m=+695.836314404" lastFinishedPulling="2026-02-18 00:37:14.684302649 +0000 UTC m=+708.061653591" observedRunningTime="2026-02-18 00:37:15.642808 +0000 UTC m=+709.020158942" watchObservedRunningTime="2026-02-18 00:37:15.644761269 +0000 UTC m=+709.022112211" Feb 18 00:37:15 crc kubenswrapper[4847]: I0218 00:37:15.667709 4847 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-jx28n" podStartSLOduration=2.646302502 podStartE2EDuration="14.667689446s" podCreationTimestamp="2026-02-18 00:37:01 +0000 UTC" firstStartedPulling="2026-02-18 00:37:02.556427043 +0000 UTC m=+695.933777975" lastFinishedPulling="2026-02-18 00:37:14.577813977 +0000 UTC m=+707.955164919" observedRunningTime="2026-02-18 00:37:15.665757386 +0000 UTC m=+709.043108328" watchObservedRunningTime="2026-02-18 00:37:15.667689446 +0000 UTC m=+709.045040398" Feb 18 00:37:17 crc kubenswrapper[4847]: I0218 00:37:17.698063 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-9jgmd" podStartSLOduration=4.572003103 podStartE2EDuration="16.698031582s" podCreationTimestamp="2026-02-18 00:37:01 +0000 UTC" firstStartedPulling="2026-02-18 00:37:02.515701422 +0000 UTC m=+695.893052364" lastFinishedPulling="2026-02-18 00:37:14.641729901 +0000 UTC m=+708.019080843" observedRunningTime="2026-02-18 00:37:15.697504698 +0000 UTC m=+709.074855640" watchObservedRunningTime="2026-02-18 00:37:17.698031582 +0000 UTC m=+711.075382564" Feb 18 00:37:17 crc kubenswrapper[4847]: I0218 00:37:17.706776 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bxm6w"] Feb 18 00:37:17 crc kubenswrapper[4847]: I0218 00:37:17.708632 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovn-controller" containerID="cri-o://3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80" gracePeriod=30 Feb 18 00:37:17 crc kubenswrapper[4847]: I0218 00:37:17.708858 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" 
containerName="northd" containerID="cri-o://efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df" gracePeriod=30 Feb 18 00:37:17 crc kubenswrapper[4847]: I0218 00:37:17.708960 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c" gracePeriod=30 Feb 18 00:37:17 crc kubenswrapper[4847]: I0218 00:37:17.709052 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="kube-rbac-proxy-node" containerID="cri-o://9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c" gracePeriod=30 Feb 18 00:37:17 crc kubenswrapper[4847]: I0218 00:37:17.709138 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovn-acl-logging" containerID="cri-o://d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485" gracePeriod=30 Feb 18 00:37:17 crc kubenswrapper[4847]: I0218 00:37:17.709245 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="sbdb" containerID="cri-o://9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4" gracePeriod=30 Feb 18 00:37:17 crc kubenswrapper[4847]: I0218 00:37:17.709338 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="nbdb" containerID="cri-o://cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb" gracePeriod=30 Feb 18 00:37:17 crc 
kubenswrapper[4847]: I0218 00:37:17.764995 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovnkube-controller" containerID="cri-o://fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd" gracePeriod=30 Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.080303 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxm6w_86e5946b-870b-46f1-8923-4a8abd64da45/ovnkube-controller/3.log" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.081773 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxm6w_86e5946b-870b-46f1-8923-4a8abd64da45/ovn-acl-logging/0.log" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.086003 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxm6w_86e5946b-870b-46f1-8923-4a8abd64da45/ovn-controller/0.log" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.089964 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.102631 4847 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86e5946b_870b_46f1_8923_4a8abd64da45.slice/crio-efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86e5946b_870b_46f1_8923_4a8abd64da45.slice/crio-conmon-cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86e5946b_870b_46f1_8923_4a8abd64da45.slice/crio-conmon-9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86e5946b_870b_46f1_8923_4a8abd64da45.slice/crio-conmon-efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df.scope\": RecentStats: unable to find data in memory cache]" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.196398 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-run-ovn\") pod \"86e5946b-870b-46f1-8923-4a8abd64da45\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.196539 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "86e5946b-870b-46f1-8923-4a8abd64da45" (UID: "86e5946b-870b-46f1-8923-4a8abd64da45"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.196903 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-cni-bin\") pod \"86e5946b-870b-46f1-8923-4a8abd64da45\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.196999 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-node-log\") pod \"86e5946b-870b-46f1-8923-4a8abd64da45\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197022 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-slash\") pod \"86e5946b-870b-46f1-8923-4a8abd64da45\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197069 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/86e5946b-870b-46f1-8923-4a8abd64da45-ovnkube-config\") pod \"86e5946b-870b-46f1-8923-4a8abd64da45\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197088 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-node-log" (OuterVolumeSpecName: "node-log") pod "86e5946b-870b-46f1-8923-4a8abd64da45" (UID: "86e5946b-870b-46f1-8923-4a8abd64da45"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197103 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-run-ovn-kubernetes\") pod \"86e5946b-870b-46f1-8923-4a8abd64da45\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197141 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-run-systemd\") pod \"86e5946b-870b-46f1-8923-4a8abd64da45\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197174 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-run-openvswitch\") pod \"86e5946b-870b-46f1-8923-4a8abd64da45\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197169 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-slash" (OuterVolumeSpecName: "host-slash") pod "86e5946b-870b-46f1-8923-4a8abd64da45" (UID: "86e5946b-870b-46f1-8923-4a8abd64da45"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197222 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-systemd-units\") pod \"86e5946b-870b-46f1-8923-4a8abd64da45\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197245 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-var-lib-openvswitch\") pod \"86e5946b-870b-46f1-8923-4a8abd64da45\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197275 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-var-lib-cni-networks-ovn-kubernetes\") pod \"86e5946b-870b-46f1-8923-4a8abd64da45\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197269 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "86e5946b-870b-46f1-8923-4a8abd64da45" (UID: "86e5946b-870b-46f1-8923-4a8abd64da45"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197313 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/86e5946b-870b-46f1-8923-4a8abd64da45-ovn-node-metrics-cert\") pod \"86e5946b-870b-46f1-8923-4a8abd64da45\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197333 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "86e5946b-870b-46f1-8923-4a8abd64da45" (UID: "86e5946b-870b-46f1-8923-4a8abd64da45"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197358 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "86e5946b-870b-46f1-8923-4a8abd64da45" (UID: "86e5946b-870b-46f1-8923-4a8abd64da45"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197365 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-kubelet\") pod \"86e5946b-870b-46f1-8923-4a8abd64da45\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197381 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "86e5946b-870b-46f1-8923-4a8abd64da45" (UID: "86e5946b-870b-46f1-8923-4a8abd64da45"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197394 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-run-netns\") pod \"86e5946b-870b-46f1-8923-4a8abd64da45\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197407 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "86e5946b-870b-46f1-8923-4a8abd64da45" (UID: "86e5946b-870b-46f1-8923-4a8abd64da45"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197421 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-cni-netd\") pod \"86e5946b-870b-46f1-8923-4a8abd64da45\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197429 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "86e5946b-870b-46f1-8923-4a8abd64da45" (UID: "86e5946b-870b-46f1-8923-4a8abd64da45"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197448 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/86e5946b-870b-46f1-8923-4a8abd64da45-ovnkube-script-lib\") pod \"86e5946b-870b-46f1-8923-4a8abd64da45\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197476 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/86e5946b-870b-46f1-8923-4a8abd64da45-env-overrides\") pod \"86e5946b-870b-46f1-8923-4a8abd64da45\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197499 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86e5946b-870b-46f1-8923-4a8abd64da45-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "86e5946b-870b-46f1-8923-4a8abd64da45" (UID: "86e5946b-870b-46f1-8923-4a8abd64da45"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197503 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-log-socket\") pod \"86e5946b-870b-46f1-8923-4a8abd64da45\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197529 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-log-socket" (OuterVolumeSpecName: "log-socket") pod "86e5946b-870b-46f1-8923-4a8abd64da45" (UID: "86e5946b-870b-46f1-8923-4a8abd64da45"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197543 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjwgx\" (UniqueName: \"kubernetes.io/projected/86e5946b-870b-46f1-8923-4a8abd64da45-kube-api-access-fjwgx\") pod \"86e5946b-870b-46f1-8923-4a8abd64da45\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197556 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "86e5946b-870b-46f1-8923-4a8abd64da45" (UID: "86e5946b-870b-46f1-8923-4a8abd64da45"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197564 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-etc-openvswitch\") pod \"86e5946b-870b-46f1-8923-4a8abd64da45\" (UID: \"86e5946b-870b-46f1-8923-4a8abd64da45\") " Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197578 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "86e5946b-870b-46f1-8923-4a8abd64da45" (UID: "86e5946b-870b-46f1-8923-4a8abd64da45"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197945 4847 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197962 4847 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-node-log\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197971 4847 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-slash\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197979 4847 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/86e5946b-870b-46f1-8923-4a8abd64da45-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197990 4847 reconciler_common.go:293] "Volume detached 
for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.197999 4847 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.198007 4847 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.198015 4847 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.198023 4847 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.198022 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86e5946b-870b-46f1-8923-4a8abd64da45-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "86e5946b-870b-46f1-8923-4a8abd64da45" (UID: "86e5946b-870b-46f1-8923-4a8abd64da45"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.198032 4847 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.198048 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "86e5946b-870b-46f1-8923-4a8abd64da45" (UID: "86e5946b-870b-46f1-8923-4a8abd64da45"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.198055 4847 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.198069 4847 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.198082 4847 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-log-socket\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.198163 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86e5946b-870b-46f1-8923-4a8abd64da45-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "86e5946b-870b-46f1-8923-4a8abd64da45" (UID: "86e5946b-870b-46f1-8923-4a8abd64da45"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.198765 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "86e5946b-870b-46f1-8923-4a8abd64da45" (UID: "86e5946b-870b-46f1-8923-4a8abd64da45"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.204152 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86e5946b-870b-46f1-8923-4a8abd64da45-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "86e5946b-870b-46f1-8923-4a8abd64da45" (UID: "86e5946b-870b-46f1-8923-4a8abd64da45"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.209732 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86e5946b-870b-46f1-8923-4a8abd64da45-kube-api-access-fjwgx" (OuterVolumeSpecName: "kube-api-access-fjwgx") pod "86e5946b-870b-46f1-8923-4a8abd64da45" (UID: "86e5946b-870b-46f1-8923-4a8abd64da45"). InnerVolumeSpecName "kube-api-access-fjwgx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.214949 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hrnp7"] Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.215243 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovn-controller" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215281 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovn-controller" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.215295 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovnkube-controller" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215302 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovnkube-controller" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.215309 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="nbdb" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215316 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="nbdb" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.215336 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="northd" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215342 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="northd" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.215353 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovnkube-controller" Feb 18 00:37:18 
crc kubenswrapper[4847]: I0218 00:37:18.215363 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovnkube-controller" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.215371 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovnkube-controller" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215378 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovnkube-controller" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.215385 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="kube-rbac-proxy-node" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215393 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="kube-rbac-proxy-node" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.215419 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="kubecfg-setup" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215425 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="kubecfg-setup" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.215432 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovn-acl-logging" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215437 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovn-acl-logging" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.215443 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="kube-rbac-proxy-ovn-metrics" Feb 
18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215449 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.215457 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="sbdb" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215463 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="sbdb" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215579 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="northd" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215589 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="kube-rbac-proxy-node" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215617 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215626 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovn-controller" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215636 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="nbdb" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215643 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovnkube-controller" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215656 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovnkube-controller" 
Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215663 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovnkube-controller" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215671 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="sbdb" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215696 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovn-acl-logging" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.215800 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovnkube-controller" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215807 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovnkube-controller" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.215818 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovnkube-controller" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215823 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovnkube-controller" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215944 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovnkube-controller" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.215953 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" containerName="ovnkube-controller" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.224725 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "86e5946b-870b-46f1-8923-4a8abd64da45" (UID: "86e5946b-870b-46f1-8923-4a8abd64da45"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.230088 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.298719 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-ovn-node-metrics-cert\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.298773 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-systemd-units\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.298806 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.298925 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-var-lib-openvswitch\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.298978 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-host-run-ovn-kubernetes\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.299048 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-log-socket\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.299071 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-host-run-netns\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.299091 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-etc-openvswitch\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.299114 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-run-openvswitch\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.299144 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-run-systemd\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.299186 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-host-slash\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.299204 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-env-overrides\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.299227 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-run-ovn\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.299262 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-host-kubelet\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.299294 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97qmp\" (UniqueName: \"kubernetes.io/projected/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-kube-api-access-97qmp\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.299315 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-ovnkube-config\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.299335 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-host-cni-bin\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.299353 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-ovnkube-script-lib\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.299368 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-host-cni-netd\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.299387 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-node-log\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.299446 4847 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/86e5946b-870b-46f1-8923-4a8abd64da45-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.299459 4847 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/86e5946b-870b-46f1-8923-4a8abd64da45-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.299469 4847 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/86e5946b-870b-46f1-8923-4a8abd64da45-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.299478 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjwgx\" (UniqueName: \"kubernetes.io/projected/86e5946b-870b-46f1-8923-4a8abd64da45-kube-api-access-fjwgx\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.299488 4847 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.299497 4847 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.299507 4847 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/86e5946b-870b-46f1-8923-4a8abd64da45-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.400593 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97qmp\" (UniqueName: \"kubernetes.io/projected/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-kube-api-access-97qmp\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.400855 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-ovnkube-config\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.400934 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-host-cni-bin\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.400998 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-host-cni-bin\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.401098 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-ovnkube-script-lib\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.401189 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-host-cni-netd\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.401348 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-node-log\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.401567 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-ovn-node-metrics-cert\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.402127 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-systemd-units\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.402203 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-systemd-units\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.401758 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-ovnkube-config\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.401889 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-ovnkube-script-lib\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.401534 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-node-log\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.401315 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-host-cni-netd\") pod \"ovnkube-node-hrnp7\" (UID: 
\"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.402436 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.402499 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.402568 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-var-lib-openvswitch\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.402667 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-host-run-ovn-kubernetes\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.402728 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-host-run-ovn-kubernetes\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.402596 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-var-lib-openvswitch\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.402799 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-host-run-netns\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.402873 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-host-run-netns\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.402977 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-log-socket\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.402931 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-log-socket\") pod 
\"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.403107 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-etc-openvswitch\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.403168 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-etc-openvswitch\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.403236 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-run-openvswitch\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.403305 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-run-systemd\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.403369 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-host-slash\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.403430 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-env-overrides\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.403492 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-run-ovn\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.403563 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-host-kubelet\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.403692 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-host-kubelet\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.403273 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-run-openvswitch\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.403845 4847 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-host-slash\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.403892 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-run-ovn\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.403924 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-run-systemd\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.404245 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-env-overrides\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.407367 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-ovn-node-metrics-cert\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.414142 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97qmp\" (UniqueName: 
\"kubernetes.io/projected/8abd87a6-8319-4b4f-a797-a2acf6d2ad7a-kube-api-access-97qmp\") pod \"ovnkube-node-hrnp7\" (UID: \"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a\") " pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.546334 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:18 crc kubenswrapper[4847]: W0218 00:37:18.569104 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8abd87a6_8319_4b4f_a797_a2acf6d2ad7a.slice/crio-18cf52c7122e6d1ce36dc659c9bc98ddfa92b4c719ca3e81c772b1f77a9be506 WatchSource:0}: Error finding container 18cf52c7122e6d1ce36dc659c9bc98ddfa92b4c719ca3e81c772b1f77a9be506: Status 404 returned error can't find the container with id 18cf52c7122e6d1ce36dc659c9bc98ddfa92b4c719ca3e81c772b1f77a9be506 Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.577401 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxm6w_86e5946b-870b-46f1-8923-4a8abd64da45/ovnkube-controller/3.log" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.579690 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxm6w_86e5946b-870b-46f1-8923-4a8abd64da45/ovn-acl-logging/0.log" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.580527 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bxm6w_86e5946b-870b-46f1-8923-4a8abd64da45/ovn-controller/0.log" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.580969 4847 generic.go:334] "Generic (PLEG): container finished" podID="86e5946b-870b-46f1-8923-4a8abd64da45" containerID="fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd" exitCode=0 Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.580992 4847 generic.go:334] "Generic (PLEG): 
container finished" podID="86e5946b-870b-46f1-8923-4a8abd64da45" containerID="9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4" exitCode=0 Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581000 4847 generic.go:334] "Generic (PLEG): container finished" podID="86e5946b-870b-46f1-8923-4a8abd64da45" containerID="cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb" exitCode=0 Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581009 4847 generic.go:334] "Generic (PLEG): container finished" podID="86e5946b-870b-46f1-8923-4a8abd64da45" containerID="efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df" exitCode=0 Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581015 4847 generic.go:334] "Generic (PLEG): container finished" podID="86e5946b-870b-46f1-8923-4a8abd64da45" containerID="5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c" exitCode=0 Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581021 4847 generic.go:334] "Generic (PLEG): container finished" podID="86e5946b-870b-46f1-8923-4a8abd64da45" containerID="9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c" exitCode=0 Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581027 4847 generic.go:334] "Generic (PLEG): container finished" podID="86e5946b-870b-46f1-8923-4a8abd64da45" containerID="d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485" exitCode=143 Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581034 4847 generic.go:334] "Generic (PLEG): container finished" podID="86e5946b-870b-46f1-8923-4a8abd64da45" containerID="3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80" exitCode=143 Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581055 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" 
event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerDied","Data":"fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581098 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerDied","Data":"9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581113 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerDied","Data":"cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581126 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerDied","Data":"efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581137 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerDied","Data":"5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581150 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerDied","Data":"9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581160 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67"} Feb 18 00:37:18 
crc kubenswrapper[4847]: I0218 00:37:18.581173 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581181 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581189 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581189 4847 scope.go:117] "RemoveContainer" containerID="fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581196 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581318 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581339 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581347 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581355 4847 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581376 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerDied","Data":"d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581405 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581414 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581422 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581430 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581439 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581447 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c"} Feb 18 
00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581455 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581463 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581471 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581478 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581488 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerDied","Data":"3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581501 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581513 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581539 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581545 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581552 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581559 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581566 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581576 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581582 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581589 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581615 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" event={"ID":"86e5946b-870b-46f1-8923-4a8abd64da45","Type":"ContainerDied","Data":"ba197af23cd9f498052e30f3f139c9da9f3d2b0a16a84678546f2b24ddbbd5e8"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581629 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581638 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581644 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581650 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581656 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581663 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581669 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 
00:37:18.581676 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581682 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.581688 4847 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.583259 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bxm6w" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.585732 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wprf4_f2eb9a65-88b5-49d1-885a-98c60c1283b4/kube-multus/2.log" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.587046 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wprf4_f2eb9a65-88b5-49d1-885a-98c60c1283b4/kube-multus/1.log" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.587092 4847 generic.go:334] "Generic (PLEG): container finished" podID="f2eb9a65-88b5-49d1-885a-98c60c1283b4" containerID="61abcb29f8d8794e0642cb97e22d8e306abd9620e04c0396bce879675cbff4fb" exitCode=2 Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.587179 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wprf4" event={"ID":"f2eb9a65-88b5-49d1-885a-98c60c1283b4","Type":"ContainerDied","Data":"61abcb29f8d8794e0642cb97e22d8e306abd9620e04c0396bce879675cbff4fb"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.587207 4847 pod_container_deletor.go:114] "Failed to issue the 
request to remove container" containerID={"Type":"cri-o","ID":"f14a2601bed78c7ba00c461098095c844732f2680236e3fe53ad2a8683126482"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.587712 4847 scope.go:117] "RemoveContainer" containerID="61abcb29f8d8794e0642cb97e22d8e306abd9620e04c0396bce879675cbff4fb" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.587880 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-wprf4_openshift-multus(f2eb9a65-88b5-49d1-885a-98c60c1283b4)\"" pod="openshift-multus/multus-wprf4" podUID="f2eb9a65-88b5-49d1-885a-98c60c1283b4" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.595194 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" event={"ID":"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a","Type":"ContainerStarted","Data":"18cf52c7122e6d1ce36dc659c9bc98ddfa92b4c719ca3e81c772b1f77a9be506"} Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.602685 4847 scope.go:117] "RemoveContainer" containerID="9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.630131 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bxm6w"] Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.633510 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bxm6w"] Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.638669 4847 scope.go:117] "RemoveContainer" containerID="9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.672023 4847 scope.go:117] "RemoveContainer" containerID="cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.691821 4847 scope.go:117] 
"RemoveContainer" containerID="efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.707826 4847 scope.go:117] "RemoveContainer" containerID="5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.724857 4847 scope.go:117] "RemoveContainer" containerID="9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.777180 4847 scope.go:117] "RemoveContainer" containerID="d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.792846 4847 scope.go:117] "RemoveContainer" containerID="3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.813639 4847 scope.go:117] "RemoveContainer" containerID="03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.827207 4847 scope.go:117] "RemoveContainer" containerID="fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.827522 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd\": container with ID starting with fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd not found: ID does not exist" containerID="fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.827553 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd"} err="failed to get container status \"fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd\": rpc error: code = NotFound 
desc = could not find container \"fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd\": container with ID starting with fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.827573 4847 scope.go:117] "RemoveContainer" containerID="9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.829742 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67\": container with ID starting with 9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67 not found: ID does not exist" containerID="9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.829791 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67"} err="failed to get container status \"9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67\": rpc error: code = NotFound desc = could not find container \"9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67\": container with ID starting with 9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67 not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.829819 4847 scope.go:117] "RemoveContainer" containerID="9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.830209 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\": container with ID starting with 
9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4 not found: ID does not exist" containerID="9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.830242 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4"} err="failed to get container status \"9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\": rpc error: code = NotFound desc = could not find container \"9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\": container with ID starting with 9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4 not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.830259 4847 scope.go:117] "RemoveContainer" containerID="cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.830556 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\": container with ID starting with cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb not found: ID does not exist" containerID="cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.830639 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb"} err="failed to get container status \"cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\": rpc error: code = NotFound desc = could not find container \"cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\": container with ID starting with cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb not found: ID does not 
exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.830680 4847 scope.go:117] "RemoveContainer" containerID="efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.830998 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\": container with ID starting with efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df not found: ID does not exist" containerID="efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.831039 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df"} err="failed to get container status \"efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\": rpc error: code = NotFound desc = could not find container \"efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\": container with ID starting with efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.831060 4847 scope.go:117] "RemoveContainer" containerID="5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.831278 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\": container with ID starting with 5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c not found: ID does not exist" containerID="5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.831298 4847 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c"} err="failed to get container status \"5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\": rpc error: code = NotFound desc = could not find container \"5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\": container with ID starting with 5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.831312 4847 scope.go:117] "RemoveContainer" containerID="9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.831845 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\": container with ID starting with 9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c not found: ID does not exist" containerID="9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.831867 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c"} err="failed to get container status \"9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\": rpc error: code = NotFound desc = could not find container \"9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\": container with ID starting with 9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.831880 4847 scope.go:117] "RemoveContainer" containerID="d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.832134 4847 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\": container with ID starting with d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485 not found: ID does not exist" containerID="d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.832154 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485"} err="failed to get container status \"d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\": rpc error: code = NotFound desc = could not find container \"d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\": container with ID starting with d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485 not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.832167 4847 scope.go:117] "RemoveContainer" containerID="3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.832370 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\": container with ID starting with 3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80 not found: ID does not exist" containerID="3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.832390 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80"} err="failed to get container status \"3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\": rpc error: code = NotFound desc = could 
not find container \"3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\": container with ID starting with 3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80 not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.832403 4847 scope.go:117] "RemoveContainer" containerID="03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9" Feb 18 00:37:18 crc kubenswrapper[4847]: E0218 00:37:18.832617 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\": container with ID starting with 03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9 not found: ID does not exist" containerID="03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.832639 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9"} err="failed to get container status \"03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\": rpc error: code = NotFound desc = could not find container \"03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\": container with ID starting with 03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9 not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.832658 4847 scope.go:117] "RemoveContainer" containerID="fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.833244 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd"} err="failed to get container status \"fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd\": rpc error: code = NotFound 
desc = could not find container \"fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd\": container with ID starting with fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.833265 4847 scope.go:117] "RemoveContainer" containerID="9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.833729 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67"} err="failed to get container status \"9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67\": rpc error: code = NotFound desc = could not find container \"9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67\": container with ID starting with 9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67 not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.833747 4847 scope.go:117] "RemoveContainer" containerID="9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.833975 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4"} err="failed to get container status \"9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\": rpc error: code = NotFound desc = could not find container \"9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\": container with ID starting with 9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4 not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.833995 4847 scope.go:117] "RemoveContainer" containerID="cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 
00:37:18.834264 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb"} err="failed to get container status \"cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\": rpc error: code = NotFound desc = could not find container \"cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\": container with ID starting with cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.834287 4847 scope.go:117] "RemoveContainer" containerID="efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.834488 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df"} err="failed to get container status \"efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\": rpc error: code = NotFound desc = could not find container \"efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\": container with ID starting with efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.834502 4847 scope.go:117] "RemoveContainer" containerID="5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.834726 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c"} err="failed to get container status \"5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\": rpc error: code = NotFound desc = could not find container \"5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\": container with ID starting with 
5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.834745 4847 scope.go:117] "RemoveContainer" containerID="9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.834917 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c"} err="failed to get container status \"9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\": rpc error: code = NotFound desc = could not find container \"9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\": container with ID starting with 9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.834932 4847 scope.go:117] "RemoveContainer" containerID="d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.835088 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485"} err="failed to get container status \"d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\": rpc error: code = NotFound desc = could not find container \"d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\": container with ID starting with d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485 not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.835102 4847 scope.go:117] "RemoveContainer" containerID="3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.835264 4847 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80"} err="failed to get container status \"3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\": rpc error: code = NotFound desc = could not find container \"3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\": container with ID starting with 3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80 not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.835278 4847 scope.go:117] "RemoveContainer" containerID="03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.835429 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9"} err="failed to get container status \"03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\": rpc error: code = NotFound desc = could not find container \"03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\": container with ID starting with 03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9 not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.835443 4847 scope.go:117] "RemoveContainer" containerID="fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.835610 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd"} err="failed to get container status \"fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd\": rpc error: code = NotFound desc = could not find container \"fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd\": container with ID starting with fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd not found: ID does not 
exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.835639 4847 scope.go:117] "RemoveContainer" containerID="9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.835831 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67"} err="failed to get container status \"9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67\": rpc error: code = NotFound desc = could not find container \"9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67\": container with ID starting with 9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67 not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.835845 4847 scope.go:117] "RemoveContainer" containerID="9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.836087 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4"} err="failed to get container status \"9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\": rpc error: code = NotFound desc = could not find container \"9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\": container with ID starting with 9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4 not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.836126 4847 scope.go:117] "RemoveContainer" containerID="cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.836329 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb"} err="failed to get container status 
\"cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\": rpc error: code = NotFound desc = could not find container \"cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\": container with ID starting with cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.836349 4847 scope.go:117] "RemoveContainer" containerID="efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.836488 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df"} err="failed to get container status \"efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\": rpc error: code = NotFound desc = could not find container \"efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\": container with ID starting with efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.836500 4847 scope.go:117] "RemoveContainer" containerID="5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.836639 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c"} err="failed to get container status \"5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\": rpc error: code = NotFound desc = could not find container \"5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\": container with ID starting with 5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.836656 4847 scope.go:117] "RemoveContainer" 
containerID="9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.836836 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c"} err="failed to get container status \"9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\": rpc error: code = NotFound desc = could not find container \"9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\": container with ID starting with 9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.836852 4847 scope.go:117] "RemoveContainer" containerID="d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.837090 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485"} err="failed to get container status \"d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\": rpc error: code = NotFound desc = could not find container \"d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\": container with ID starting with d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485 not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.837106 4847 scope.go:117] "RemoveContainer" containerID="3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.837278 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80"} err="failed to get container status \"3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\": rpc error: code = NotFound desc = could 
not find container \"3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\": container with ID starting with 3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80 not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.837622 4847 scope.go:117] "RemoveContainer" containerID="03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.837868 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9"} err="failed to get container status \"03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\": rpc error: code = NotFound desc = could not find container \"03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\": container with ID starting with 03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9 not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.837909 4847 scope.go:117] "RemoveContainer" containerID="fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.838103 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd"} err="failed to get container status \"fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd\": rpc error: code = NotFound desc = could not find container \"fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd\": container with ID starting with fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.838118 4847 scope.go:117] "RemoveContainer" containerID="9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 
00:37:18.838298 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67"} err="failed to get container status \"9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67\": rpc error: code = NotFound desc = could not find container \"9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67\": container with ID starting with 9c9fac5a42ab14e595ded6f53026a6c8d20fb1ea508c638ed66faa75f759da67 not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.838311 4847 scope.go:117] "RemoveContainer" containerID="9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.838474 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4"} err="failed to get container status \"9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\": rpc error: code = NotFound desc = could not find container \"9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4\": container with ID starting with 9104df0ba3fb1e66ed95418efbae872f3f5d059ae11fecdba8697d33a7cbc1d4 not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.838488 4847 scope.go:117] "RemoveContainer" containerID="cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.838658 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb"} err="failed to get container status \"cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\": rpc error: code = NotFound desc = could not find container \"cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb\": container with ID starting with 
cc16bb90105d1857b1f33a084a1acd30528671fedf2bd15c30bb9a458d3c88fb not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.838672 4847 scope.go:117] "RemoveContainer" containerID="efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.838823 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df"} err="failed to get container status \"efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\": rpc error: code = NotFound desc = could not find container \"efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df\": container with ID starting with efc91357087c0df14b7f17a78032a4c1aca70843a234718d4fe02ae770b886df not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.838838 4847 scope.go:117] "RemoveContainer" containerID="5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.838991 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c"} err="failed to get container status \"5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\": rpc error: code = NotFound desc = could not find container \"5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c\": container with ID starting with 5da560bf4f8d232a7da8ea0e74189b3c8dc2164813d33d7c15b00eff91da7d7c not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.839004 4847 scope.go:117] "RemoveContainer" containerID="9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.839155 4847 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c"} err="failed to get container status \"9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\": rpc error: code = NotFound desc = could not find container \"9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c\": container with ID starting with 9bb8e01acf2828518708bddcb87daea9a142d50a70addacc45153931867a679c not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.839168 4847 scope.go:117] "RemoveContainer" containerID="d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.839328 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485"} err="failed to get container status \"d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\": rpc error: code = NotFound desc = could not find container \"d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485\": container with ID starting with d3a303e7693a9c34cdd82891aca7664d7dd787f75b7579a7ca441d22bbd2e485 not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.840388 4847 scope.go:117] "RemoveContainer" containerID="3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.840724 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80"} err="failed to get container status \"3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\": rpc error: code = NotFound desc = could not find container \"3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80\": container with ID starting with 3eb0906a35c57f6adf94d062e2095a67cbf49b775365d9d5d1251a4763d1bb80 not found: ID does not 
exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.840766 4847 scope.go:117] "RemoveContainer" containerID="03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.841341 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9"} err="failed to get container status \"03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\": rpc error: code = NotFound desc = could not find container \"03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9\": container with ID starting with 03aa4131b03e391b55bbb43c5a4d3c207a005ac89963696f13d70bc76b0b82a9 not found: ID does not exist" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.841409 4847 scope.go:117] "RemoveContainer" containerID="fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd" Feb 18 00:37:18 crc kubenswrapper[4847]: I0218 00:37:18.841892 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd"} err="failed to get container status \"fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd\": rpc error: code = NotFound desc = could not find container \"fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd\": container with ID starting with fd583b2a9aaa5b577ecdf2c3117247f19a164abd1280b75f4717f11bcc9a6abd not found: ID does not exist" Feb 18 00:37:19 crc kubenswrapper[4847]: I0218 00:37:19.410474 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86e5946b-870b-46f1-8923-4a8abd64da45" path="/var/lib/kubelet/pods/86e5946b-870b-46f1-8923-4a8abd64da45/volumes" Feb 18 00:37:19 crc kubenswrapper[4847]: I0218 00:37:19.602582 4847 generic.go:334] "Generic (PLEG): container finished" podID="8abd87a6-8319-4b4f-a797-a2acf6d2ad7a" 
containerID="4d5e753afe4c4192a553174b7a2da3ee14493e03d471cc96a2c2e9286e2ac015" exitCode=0 Feb 18 00:37:19 crc kubenswrapper[4847]: I0218 00:37:19.602644 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" event={"ID":"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a","Type":"ContainerDied","Data":"4d5e753afe4c4192a553174b7a2da3ee14493e03d471cc96a2c2e9286e2ac015"} Feb 18 00:37:20 crc kubenswrapper[4847]: I0218 00:37:20.614069 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" event={"ID":"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a","Type":"ContainerStarted","Data":"57fe9b74ea4781352b286d7008b1240da2cfcd6e74904d7992b276b587a5056a"} Feb 18 00:37:20 crc kubenswrapper[4847]: I0218 00:37:20.614448 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" event={"ID":"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a","Type":"ContainerStarted","Data":"756e8dc1b38b00a1f26d009945903b11ae8e5e92c225c64f8e3732bef04dd59e"} Feb 18 00:37:20 crc kubenswrapper[4847]: I0218 00:37:20.614465 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" event={"ID":"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a","Type":"ContainerStarted","Data":"8071d280cbc48ddb551e1804f415bc4c7f2d9e643644af2bf974191f770d5279"} Feb 18 00:37:20 crc kubenswrapper[4847]: I0218 00:37:20.614479 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" event={"ID":"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a","Type":"ContainerStarted","Data":"fd21dfac254b936843959be576e8fb5076868ca0ad986a119f18164cc11a2569"} Feb 18 00:37:20 crc kubenswrapper[4847]: I0218 00:37:20.614490 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" 
event={"ID":"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a","Type":"ContainerStarted","Data":"47325c22dc2acdbead81e3e19d948f49aef2957dd643b87cb3ed904006fc5b66"} Feb 18 00:37:20 crc kubenswrapper[4847]: I0218 00:37:20.614501 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" event={"ID":"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a","Type":"ContainerStarted","Data":"cf3f505624d1be642e0cd0fc62dbf48671278b3b772a673d2e849dcab86ac9db"} Feb 18 00:37:21 crc kubenswrapper[4847]: I0218 00:37:21.940124 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp"] Feb 18 00:37:21 crc kubenswrapper[4847]: I0218 00:37:21.941941 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp" Feb 18 00:37:21 crc kubenswrapper[4847]: I0218 00:37:21.944092 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 18 00:37:21 crc kubenswrapper[4847]: I0218 00:37:21.944089 4847 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-hf9mv" Feb 18 00:37:21 crc kubenswrapper[4847]: I0218 00:37:21.945709 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 18 00:37:21 crc kubenswrapper[4847]: I0218 00:37:21.959984 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-7tjhn"] Feb 18 00:37:21 crc kubenswrapper[4847]: I0218 00:37:21.960946 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7tjhn" Feb 18 00:37:21 crc kubenswrapper[4847]: I0218 00:37:21.969344 4847 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-fgdzw" Feb 18 00:37:21 crc kubenswrapper[4847]: I0218 00:37:21.980771 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wrhbw"] Feb 18 00:37:21 crc kubenswrapper[4847]: I0218 00:37:21.981726 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" Feb 18 00:37:21 crc kubenswrapper[4847]: I0218 00:37:21.983273 4847 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-k28kj" Feb 18 00:37:22 crc kubenswrapper[4847]: I0218 00:37:22.050411 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt84d\" (UniqueName: \"kubernetes.io/projected/7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd-kube-api-access-zt84d\") pod \"cert-manager-858654f9db-7tjhn\" (UID: \"7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd\") " pod="cert-manager/cert-manager-858654f9db-7tjhn" Feb 18 00:37:22 crc kubenswrapper[4847]: I0218 00:37:22.050460 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74f4r\" (UniqueName: \"kubernetes.io/projected/3280aa1e-4dd8-438a-81c4-a07a1b7080db-kube-api-access-74f4r\") pod \"cert-manager-cainjector-cf98fcc89-7gsvp\" (UID: \"3280aa1e-4dd8-438a-81c4-a07a1b7080db\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp" Feb 18 00:37:22 crc kubenswrapper[4847]: I0218 00:37:22.050504 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fvnc\" (UniqueName: \"kubernetes.io/projected/7d40c331-d27a-4d9f-910d-3c11700f264b-kube-api-access-6fvnc\") pod 
\"cert-manager-webhook-687f57d79b-wrhbw\" (UID: \"7d40c331-d27a-4d9f-910d-3c11700f264b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" Feb 18 00:37:22 crc kubenswrapper[4847]: I0218 00:37:22.152335 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt84d\" (UniqueName: \"kubernetes.io/projected/7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd-kube-api-access-zt84d\") pod \"cert-manager-858654f9db-7tjhn\" (UID: \"7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd\") " pod="cert-manager/cert-manager-858654f9db-7tjhn" Feb 18 00:37:22 crc kubenswrapper[4847]: I0218 00:37:22.152394 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74f4r\" (UniqueName: \"kubernetes.io/projected/3280aa1e-4dd8-438a-81c4-a07a1b7080db-kube-api-access-74f4r\") pod \"cert-manager-cainjector-cf98fcc89-7gsvp\" (UID: \"3280aa1e-4dd8-438a-81c4-a07a1b7080db\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp" Feb 18 00:37:22 crc kubenswrapper[4847]: I0218 00:37:22.152446 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fvnc\" (UniqueName: \"kubernetes.io/projected/7d40c331-d27a-4d9f-910d-3c11700f264b-kube-api-access-6fvnc\") pod \"cert-manager-webhook-687f57d79b-wrhbw\" (UID: \"7d40c331-d27a-4d9f-910d-3c11700f264b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" Feb 18 00:37:22 crc kubenswrapper[4847]: I0218 00:37:22.183881 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fvnc\" (UniqueName: \"kubernetes.io/projected/7d40c331-d27a-4d9f-910d-3c11700f264b-kube-api-access-6fvnc\") pod \"cert-manager-webhook-687f57d79b-wrhbw\" (UID: \"7d40c331-d27a-4d9f-910d-3c11700f264b\") " pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" Feb 18 00:37:22 crc kubenswrapper[4847]: I0218 00:37:22.184536 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt84d\" (UniqueName: 
\"kubernetes.io/projected/7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd-kube-api-access-zt84d\") pod \"cert-manager-858654f9db-7tjhn\" (UID: \"7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd\") " pod="cert-manager/cert-manager-858654f9db-7tjhn" Feb 18 00:37:22 crc kubenswrapper[4847]: I0218 00:37:22.192103 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74f4r\" (UniqueName: \"kubernetes.io/projected/3280aa1e-4dd8-438a-81c4-a07a1b7080db-kube-api-access-74f4r\") pod \"cert-manager-cainjector-cf98fcc89-7gsvp\" (UID: \"3280aa1e-4dd8-438a-81c4-a07a1b7080db\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp" Feb 18 00:37:22 crc kubenswrapper[4847]: I0218 00:37:22.254849 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp" Feb 18 00:37:22 crc kubenswrapper[4847]: I0218 00:37:22.272935 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7tjhn" Feb 18 00:37:22 crc kubenswrapper[4847]: E0218 00:37:22.286031 4847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-7gsvp_cert-manager_3280aa1e-4dd8-438a-81c4-a07a1b7080db_0(ce9e0270875457872bea552188a4aeec0a22006e032f51ed0a18a3f303882cc5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:37:22 crc kubenswrapper[4847]: E0218 00:37:22.286108 4847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-7gsvp_cert-manager_3280aa1e-4dd8-438a-81c4-a07a1b7080db_0(ce9e0270875457872bea552188a4aeec0a22006e032f51ed0a18a3f303882cc5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp" Feb 18 00:37:22 crc kubenswrapper[4847]: E0218 00:37:22.286131 4847 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-7gsvp_cert-manager_3280aa1e-4dd8-438a-81c4-a07a1b7080db_0(ce9e0270875457872bea552188a4aeec0a22006e032f51ed0a18a3f303882cc5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp" Feb 18 00:37:22 crc kubenswrapper[4847]: E0218 00:37:22.286169 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-7gsvp_cert-manager(3280aa1e-4dd8-438a-81c4-a07a1b7080db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-7gsvp_cert-manager(3280aa1e-4dd8-438a-81c4-a07a1b7080db)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-7gsvp_cert-manager_3280aa1e-4dd8-438a-81c4-a07a1b7080db_0(ce9e0270875457872bea552188a4aeec0a22006e032f51ed0a18a3f303882cc5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp" podUID="3280aa1e-4dd8-438a-81c4-a07a1b7080db" Feb 18 00:37:22 crc kubenswrapper[4847]: I0218 00:37:22.297999 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" Feb 18 00:37:22 crc kubenswrapper[4847]: E0218 00:37:22.316808 4847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-7tjhn_cert-manager_7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd_0(5a12b6f6c693fe63260cb6e5b33afd966340e8d48bbe597513528004f23cb086): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:37:22 crc kubenswrapper[4847]: E0218 00:37:22.316888 4847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-7tjhn_cert-manager_7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd_0(5a12b6f6c693fe63260cb6e5b33afd966340e8d48bbe597513528004f23cb086): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-7tjhn" Feb 18 00:37:22 crc kubenswrapper[4847]: E0218 00:37:22.316916 4847 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-7tjhn_cert-manager_7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd_0(5a12b6f6c693fe63260cb6e5b33afd966340e8d48bbe597513528004f23cb086): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-858654f9db-7tjhn" Feb 18 00:37:22 crc kubenswrapper[4847]: E0218 00:37:22.316964 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-7tjhn_cert-manager(7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-7tjhn_cert-manager(7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-7tjhn_cert-manager_7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd_0(5a12b6f6c693fe63260cb6e5b33afd966340e8d48bbe597513528004f23cb086): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-7tjhn" podUID="7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd" Feb 18 00:37:22 crc kubenswrapper[4847]: I0218 00:37:22.321905 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-jx28n" Feb 18 00:37:22 crc kubenswrapper[4847]: E0218 00:37:22.342234 4847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-wrhbw_cert-manager_7d40c331-d27a-4d9f-910d-3c11700f264b_0(18880f6aeb18e319ef7124d7ae74636572b6214ee40e4153af58ca50afa85a9c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:37:22 crc kubenswrapper[4847]: E0218 00:37:22.342299 4847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-wrhbw_cert-manager_7d40c331-d27a-4d9f-910d-3c11700f264b_0(18880f6aeb18e319ef7124d7ae74636572b6214ee40e4153af58ca50afa85a9c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" Feb 18 00:37:22 crc kubenswrapper[4847]: E0218 00:37:22.342328 4847 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-wrhbw_cert-manager_7d40c331-d27a-4d9f-910d-3c11700f264b_0(18880f6aeb18e319ef7124d7ae74636572b6214ee40e4153af58ca50afa85a9c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" Feb 18 00:37:22 crc kubenswrapper[4847]: E0218 00:37:22.342376 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-wrhbw_cert-manager(7d40c331-d27a-4d9f-910d-3c11700f264b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-wrhbw_cert-manager(7d40c331-d27a-4d9f-910d-3c11700f264b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-wrhbw_cert-manager_7d40c331-d27a-4d9f-910d-3c11700f264b_0(18880f6aeb18e319ef7124d7ae74636572b6214ee40e4153af58ca50afa85a9c): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" podUID="7d40c331-d27a-4d9f-910d-3c11700f264b" Feb 18 00:37:22 crc kubenswrapper[4847]: I0218 00:37:22.628542 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" event={"ID":"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a","Type":"ContainerStarted","Data":"989ab7de11fa7d5e9bf3164aff4f745cb865722a356d262cdcf2efa42b8cbf9e"} Feb 18 00:37:23 crc kubenswrapper[4847]: I0218 00:37:23.491486 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:37:23 crc kubenswrapper[4847]: I0218 00:37:23.492205 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:37:25 crc kubenswrapper[4847]: I0218 00:37:25.653768 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" event={"ID":"8abd87a6-8319-4b4f-a797-a2acf6d2ad7a","Type":"ContainerStarted","Data":"6dc1f9d2eeb935303f3b6b93e1957905badcb4d4af96108364c2c6c11919d6a6"} Feb 18 00:37:25 crc kubenswrapper[4847]: I0218 00:37:25.654036 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:25 crc kubenswrapper[4847]: I0218 00:37:25.683902 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" podStartSLOduration=7.683886243 podStartE2EDuration="7.683886243s" podCreationTimestamp="2026-02-18 
00:37:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:37:25.681161534 +0000 UTC m=+719.058512476" watchObservedRunningTime="2026-02-18 00:37:25.683886243 +0000 UTC m=+719.061237175" Feb 18 00:37:25 crc kubenswrapper[4847]: I0218 00:37:25.692341 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:26 crc kubenswrapper[4847]: I0218 00:37:26.097224 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-7tjhn"] Feb 18 00:37:26 crc kubenswrapper[4847]: I0218 00:37:26.097327 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7tjhn" Feb 18 00:37:26 crc kubenswrapper[4847]: I0218 00:37:26.097729 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7tjhn" Feb 18 00:37:26 crc kubenswrapper[4847]: I0218 00:37:26.134665 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wrhbw"] Feb 18 00:37:26 crc kubenswrapper[4847]: I0218 00:37:26.134802 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" Feb 18 00:37:26 crc kubenswrapper[4847]: I0218 00:37:26.135230 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" Feb 18 00:37:26 crc kubenswrapper[4847]: E0218 00:37:26.142788 4847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-7tjhn_cert-manager_7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd_0(87839c79821891633287d3ced171e6c5fd74f96037d375428fb4056a81e906df): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 18 00:37:26 crc kubenswrapper[4847]: E0218 00:37:26.142855 4847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-7tjhn_cert-manager_7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd_0(87839c79821891633287d3ced171e6c5fd74f96037d375428fb4056a81e906df): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-7tjhn" Feb 18 00:37:26 crc kubenswrapper[4847]: E0218 00:37:26.142887 4847 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-7tjhn_cert-manager_7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd_0(87839c79821891633287d3ced171e6c5fd74f96037d375428fb4056a81e906df): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-7tjhn" Feb 18 00:37:26 crc kubenswrapper[4847]: E0218 00:37:26.142928 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-7tjhn_cert-manager(7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-7tjhn_cert-manager(7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-7tjhn_cert-manager_7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd_0(87839c79821891633287d3ced171e6c5fd74f96037d375428fb4056a81e906df): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-7tjhn" podUID="7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd" Feb 18 00:37:26 crc kubenswrapper[4847]: I0218 00:37:26.165356 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp"] Feb 18 00:37:26 crc kubenswrapper[4847]: I0218 00:37:26.165484 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp" Feb 18 00:37:26 crc kubenswrapper[4847]: I0218 00:37:26.165866 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp" Feb 18 00:37:26 crc kubenswrapper[4847]: E0218 00:37:26.206649 4847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-7gsvp_cert-manager_3280aa1e-4dd8-438a-81c4-a07a1b7080db_0(cb6b2e961a8d49379bbd0aaf4214d78fddba9758bcf3d81cd2df6b23cdbb7a41): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:37:26 crc kubenswrapper[4847]: E0218 00:37:26.206718 4847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-7gsvp_cert-manager_3280aa1e-4dd8-438a-81c4-a07a1b7080db_0(cb6b2e961a8d49379bbd0aaf4214d78fddba9758bcf3d81cd2df6b23cdbb7a41): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp" Feb 18 00:37:26 crc kubenswrapper[4847]: E0218 00:37:26.206742 4847 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-7gsvp_cert-manager_3280aa1e-4dd8-438a-81c4-a07a1b7080db_0(cb6b2e961a8d49379bbd0aaf4214d78fddba9758bcf3d81cd2df6b23cdbb7a41): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp" Feb 18 00:37:26 crc kubenswrapper[4847]: E0218 00:37:26.206788 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-7gsvp_cert-manager(3280aa1e-4dd8-438a-81c4-a07a1b7080db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-7gsvp_cert-manager(3280aa1e-4dd8-438a-81c4-a07a1b7080db)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-7gsvp_cert-manager_3280aa1e-4dd8-438a-81c4-a07a1b7080db_0(cb6b2e961a8d49379bbd0aaf4214d78fddba9758bcf3d81cd2df6b23cdbb7a41): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp" podUID="3280aa1e-4dd8-438a-81c4-a07a1b7080db" Feb 18 00:37:26 crc kubenswrapper[4847]: E0218 00:37:26.215611 4847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-wrhbw_cert-manager_7d40c331-d27a-4d9f-910d-3c11700f264b_0(3ffddf651a67e00dc8d17e800e8d60981a82df6623151c09431d6a5b60f914e0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 18 00:37:26 crc kubenswrapper[4847]: E0218 00:37:26.215690 4847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-wrhbw_cert-manager_7d40c331-d27a-4d9f-910d-3c11700f264b_0(3ffddf651a67e00dc8d17e800e8d60981a82df6623151c09431d6a5b60f914e0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" Feb 18 00:37:26 crc kubenswrapper[4847]: E0218 00:37:26.215713 4847 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-wrhbw_cert-manager_7d40c331-d27a-4d9f-910d-3c11700f264b_0(3ffddf651a67e00dc8d17e800e8d60981a82df6623151c09431d6a5b60f914e0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" Feb 18 00:37:26 crc kubenswrapper[4847]: E0218 00:37:26.215758 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-wrhbw_cert-manager(7d40c331-d27a-4d9f-910d-3c11700f264b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-wrhbw_cert-manager(7d40c331-d27a-4d9f-910d-3c11700f264b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-wrhbw_cert-manager_7d40c331-d27a-4d9f-910d-3c11700f264b_0(3ffddf651a67e00dc8d17e800e8d60981a82df6623151c09431d6a5b60f914e0): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" podUID="7d40c331-d27a-4d9f-910d-3c11700f264b" Feb 18 00:37:26 crc kubenswrapper[4847]: I0218 00:37:26.658495 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:26 crc kubenswrapper[4847]: I0218 00:37:26.658539 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:26 crc kubenswrapper[4847]: I0218 00:37:26.683502 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:27 crc kubenswrapper[4847]: I0218 00:37:27.905041 4847 scope.go:117] "RemoveContainer" containerID="f14a2601bed78c7ba00c461098095c844732f2680236e3fe53ad2a8683126482" Feb 18 00:37:28 crc kubenswrapper[4847]: I0218 00:37:28.669365 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wprf4_f2eb9a65-88b5-49d1-885a-98c60c1283b4/kube-multus/2.log" Feb 18 00:37:32 crc kubenswrapper[4847]: I0218 00:37:32.404913 4847 scope.go:117] "RemoveContainer" containerID="61abcb29f8d8794e0642cb97e22d8e306abd9620e04c0396bce879675cbff4fb" Feb 18 00:37:32 crc kubenswrapper[4847]: E0218 00:37:32.405799 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-wprf4_openshift-multus(f2eb9a65-88b5-49d1-885a-98c60c1283b4)\"" pod="openshift-multus/multus-wprf4" podUID="f2eb9a65-88b5-49d1-885a-98c60c1283b4" Feb 18 00:37:39 crc kubenswrapper[4847]: I0218 00:37:39.403851 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" Feb 18 00:37:39 crc kubenswrapper[4847]: I0218 00:37:39.404932 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" Feb 18 00:37:39 crc kubenswrapper[4847]: E0218 00:37:39.445931 4847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-wrhbw_cert-manager_7d40c331-d27a-4d9f-910d-3c11700f264b_0(96f643901120c1bae95bc13c175a4fc2cda23714e1c3f27b3759ed0dbd79beed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:37:39 crc kubenswrapper[4847]: E0218 00:37:39.446370 4847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-wrhbw_cert-manager_7d40c331-d27a-4d9f-910d-3c11700f264b_0(96f643901120c1bae95bc13c175a4fc2cda23714e1c3f27b3759ed0dbd79beed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" Feb 18 00:37:39 crc kubenswrapper[4847]: E0218 00:37:39.446408 4847 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-wrhbw_cert-manager_7d40c331-d27a-4d9f-910d-3c11700f264b_0(96f643901120c1bae95bc13c175a4fc2cda23714e1c3f27b3759ed0dbd79beed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" Feb 18 00:37:39 crc kubenswrapper[4847]: E0218 00:37:39.446486 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-wrhbw_cert-manager(7d40c331-d27a-4d9f-910d-3c11700f264b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-wrhbw_cert-manager(7d40c331-d27a-4d9f-910d-3c11700f264b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-wrhbw_cert-manager_7d40c331-d27a-4d9f-910d-3c11700f264b_0(96f643901120c1bae95bc13c175a4fc2cda23714e1c3f27b3759ed0dbd79beed): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" podUID="7d40c331-d27a-4d9f-910d-3c11700f264b" Feb 18 00:37:40 crc kubenswrapper[4847]: I0218 00:37:40.404167 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp" Feb 18 00:37:40 crc kubenswrapper[4847]: I0218 00:37:40.405038 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp" Feb 18 00:37:40 crc kubenswrapper[4847]: E0218 00:37:40.452884 4847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-7gsvp_cert-manager_3280aa1e-4dd8-438a-81c4-a07a1b7080db_0(5082bab9fe5d918853b8e0e745ddbcda89b654e84a6e74a6da3c4280f80bcd68): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 18 00:37:40 crc kubenswrapper[4847]: E0218 00:37:40.452972 4847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-7gsvp_cert-manager_3280aa1e-4dd8-438a-81c4-a07a1b7080db_0(5082bab9fe5d918853b8e0e745ddbcda89b654e84a6e74a6da3c4280f80bcd68): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp" Feb 18 00:37:40 crc kubenswrapper[4847]: E0218 00:37:40.452996 4847 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-7gsvp_cert-manager_3280aa1e-4dd8-438a-81c4-a07a1b7080db_0(5082bab9fe5d918853b8e0e745ddbcda89b654e84a6e74a6da3c4280f80bcd68): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp" Feb 18 00:37:40 crc kubenswrapper[4847]: E0218 00:37:40.453042 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-7gsvp_cert-manager(3280aa1e-4dd8-438a-81c4-a07a1b7080db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-7gsvp_cert-manager(3280aa1e-4dd8-438a-81c4-a07a1b7080db)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-7gsvp_cert-manager_3280aa1e-4dd8-438a-81c4-a07a1b7080db_0(5082bab9fe5d918853b8e0e745ddbcda89b654e84a6e74a6da3c4280f80bcd68): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp" podUID="3280aa1e-4dd8-438a-81c4-a07a1b7080db" Feb 18 00:37:41 crc kubenswrapper[4847]: I0218 00:37:41.403596 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7tjhn" Feb 18 00:37:41 crc kubenswrapper[4847]: I0218 00:37:41.404356 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7tjhn" Feb 18 00:37:41 crc kubenswrapper[4847]: E0218 00:37:41.443336 4847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-7tjhn_cert-manager_7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd_0(44c113b396848ebe2eca798c5bf948066e9b5fb40ff1c07950491575cf0888e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 18 00:37:41 crc kubenswrapper[4847]: E0218 00:37:41.443453 4847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-7tjhn_cert-manager_7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd_0(44c113b396848ebe2eca798c5bf948066e9b5fb40ff1c07950491575cf0888e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-7tjhn" Feb 18 00:37:41 crc kubenswrapper[4847]: E0218 00:37:41.443492 4847 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-7tjhn_cert-manager_7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd_0(44c113b396848ebe2eca798c5bf948066e9b5fb40ff1c07950491575cf0888e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-858654f9db-7tjhn" Feb 18 00:37:41 crc kubenswrapper[4847]: E0218 00:37:41.443570 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-7tjhn_cert-manager(7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-7tjhn_cert-manager(7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-7tjhn_cert-manager_7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd_0(44c113b396848ebe2eca798c5bf948066e9b5fb40ff1c07950491575cf0888e1): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-7tjhn" podUID="7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd" Feb 18 00:37:45 crc kubenswrapper[4847]: I0218 00:37:45.406081 4847 scope.go:117] "RemoveContainer" containerID="61abcb29f8d8794e0642cb97e22d8e306abd9620e04c0396bce879675cbff4fb" Feb 18 00:37:46 crc kubenswrapper[4847]: I0218 00:37:46.815054 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wprf4_f2eb9a65-88b5-49d1-885a-98c60c1283b4/kube-multus/2.log" Feb 18 00:37:46 crc kubenswrapper[4847]: I0218 00:37:46.815702 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wprf4" event={"ID":"f2eb9a65-88b5-49d1-885a-98c60c1283b4","Type":"ContainerStarted","Data":"fbc63e0f37ff55c7f26a8056cf8812c7b5556fa3cedc9c978cfb09cc494dccba"} Feb 18 00:37:48 crc kubenswrapper[4847]: I0218 00:37:48.584722 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hrnp7" Feb 18 00:37:51 crc kubenswrapper[4847]: I0218 00:37:51.403811 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" Feb 18 00:37:51 crc kubenswrapper[4847]: I0218 00:37:51.404938 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" Feb 18 00:37:51 crc kubenswrapper[4847]: I0218 00:37:51.863150 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wrhbw"] Feb 18 00:37:51 crc kubenswrapper[4847]: W0218 00:37:51.871061 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d40c331_d27a_4d9f_910d_3c11700f264b.slice/crio-d46c773c13028a4b96b29e119d7902d0cb64c5458ba9d5470df385553dbad56c WatchSource:0}: Error finding container d46c773c13028a4b96b29e119d7902d0cb64c5458ba9d5470df385553dbad56c: Status 404 returned error can't find the container with id d46c773c13028a4b96b29e119d7902d0cb64c5458ba9d5470df385553dbad56c Feb 18 00:37:52 crc kubenswrapper[4847]: I0218 00:37:52.404212 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp" Feb 18 00:37:52 crc kubenswrapper[4847]: I0218 00:37:52.405187 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp" Feb 18 00:37:52 crc kubenswrapper[4847]: I0218 00:37:52.865445 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" event={"ID":"7d40c331-d27a-4d9f-910d-3c11700f264b","Type":"ContainerStarted","Data":"d46c773c13028a4b96b29e119d7902d0cb64c5458ba9d5470df385553dbad56c"} Feb 18 00:37:52 crc kubenswrapper[4847]: I0218 00:37:52.889710 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp"] Feb 18 00:37:52 crc kubenswrapper[4847]: W0218 00:37:52.905851 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3280aa1e_4dd8_438a_81c4_a07a1b7080db.slice/crio-cbecbe42bc0c97b5eed6c3f682effe8c48d4a94d8133c684cdc6c9a6383b58d9 WatchSource:0}: Error finding container cbecbe42bc0c97b5eed6c3f682effe8c48d4a94d8133c684cdc6c9a6383b58d9: Status 404 returned error can't find the container with id cbecbe42bc0c97b5eed6c3f682effe8c48d4a94d8133c684cdc6c9a6383b58d9 Feb 18 00:37:53 crc kubenswrapper[4847]: I0218 00:37:53.404487 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7tjhn" Feb 18 00:37:53 crc kubenswrapper[4847]: I0218 00:37:53.405402 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7tjhn" Feb 18 00:37:53 crc kubenswrapper[4847]: I0218 00:37:53.491475 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:37:53 crc kubenswrapper[4847]: I0218 00:37:53.491549 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:37:53 crc kubenswrapper[4847]: I0218 00:37:53.697445 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-7tjhn"] Feb 18 00:37:53 crc kubenswrapper[4847]: W0218 00:37:53.704353 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c1b21d7_11d3_4f97_aee8_d17dbeec7dbd.slice/crio-4795ef515172acb5920494a2eb410dbc9a73408388804c777b1904a126bd9b9d WatchSource:0}: Error finding container 4795ef515172acb5920494a2eb410dbc9a73408388804c777b1904a126bd9b9d: Status 404 returned error can't find the container with id 4795ef515172acb5920494a2eb410dbc9a73408388804c777b1904a126bd9b9d Feb 18 00:37:53 crc kubenswrapper[4847]: I0218 00:37:53.874208 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-7tjhn" event={"ID":"7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd","Type":"ContainerStarted","Data":"4795ef515172acb5920494a2eb410dbc9a73408388804c777b1904a126bd9b9d"} Feb 18 00:37:53 crc kubenswrapper[4847]: I0218 00:37:53.875503 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp" event={"ID":"3280aa1e-4dd8-438a-81c4-a07a1b7080db","Type":"ContainerStarted","Data":"cbecbe42bc0c97b5eed6c3f682effe8c48d4a94d8133c684cdc6c9a6383b58d9"} Feb 18 00:37:55 crc kubenswrapper[4847]: I0218 00:37:55.889809 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" event={"ID":"7d40c331-d27a-4d9f-910d-3c11700f264b","Type":"ContainerStarted","Data":"10908efe357b12c69909245ba0c31594ef9df948851c031406ed3dabdb63203b"} Feb 18 00:37:55 crc kubenswrapper[4847]: I0218 00:37:55.890236 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" Feb 18 00:37:55 crc kubenswrapper[4847]: I0218 00:37:55.909359 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" podStartSLOduration=31.110643845 podStartE2EDuration="34.909323334s" podCreationTimestamp="2026-02-18 00:37:21 +0000 UTC" firstStartedPulling="2026-02-18 00:37:51.87884877 +0000 UTC m=+745.256199742" lastFinishedPulling="2026-02-18 00:37:55.677528289 +0000 UTC m=+749.054879231" observedRunningTime="2026-02-18 00:37:55.906756988 +0000 UTC m=+749.284107940" watchObservedRunningTime="2026-02-18 00:37:55.909323334 +0000 UTC m=+749.286674276" Feb 18 00:37:57 crc kubenswrapper[4847]: I0218 00:37:57.904882 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp" event={"ID":"3280aa1e-4dd8-438a-81c4-a07a1b7080db","Type":"ContainerStarted","Data":"0b9b2819155b5a51c9cda80e931a088cdd86a0645f7a43e0f053ec7c6a076a91"} Feb 18 00:37:57 crc kubenswrapper[4847]: I0218 00:37:57.908496 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-7tjhn" 
event={"ID":"7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd","Type":"ContainerStarted","Data":"27f33c0946d206cbac70717b8e523e88a6cb6720c9a6b14fcb2ac536dc02298f"} Feb 18 00:37:57 crc kubenswrapper[4847]: I0218 00:37:57.926860 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7gsvp" podStartSLOduration=32.870967158 podStartE2EDuration="36.926829663s" podCreationTimestamp="2026-02-18 00:37:21 +0000 UTC" firstStartedPulling="2026-02-18 00:37:52.907924451 +0000 UTC m=+746.285275433" lastFinishedPulling="2026-02-18 00:37:56.963786996 +0000 UTC m=+750.341137938" observedRunningTime="2026-02-18 00:37:57.922839181 +0000 UTC m=+751.300190153" watchObservedRunningTime="2026-02-18 00:37:57.926829663 +0000 UTC m=+751.304180635" Feb 18 00:37:57 crc kubenswrapper[4847]: I0218 00:37:57.945967 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-7tjhn" podStartSLOduration=33.624544895 podStartE2EDuration="36.945942833s" podCreationTimestamp="2026-02-18 00:37:21 +0000 UTC" firstStartedPulling="2026-02-18 00:37:53.706891111 +0000 UTC m=+747.084242063" lastFinishedPulling="2026-02-18 00:37:57.028289049 +0000 UTC m=+750.405640001" observedRunningTime="2026-02-18 00:37:57.941047468 +0000 UTC m=+751.318398440" watchObservedRunningTime="2026-02-18 00:37:57.945942833 +0000 UTC m=+751.323293805" Feb 18 00:38:01 crc kubenswrapper[4847]: I0218 00:38:01.578378 4847 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 00:38:02 crc kubenswrapper[4847]: I0218 00:38:02.302015 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-wrhbw" Feb 18 00:38:14 crc kubenswrapper[4847]: I0218 00:38:14.910273 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6zsk5"] Feb 18 00:38:14 crc 
kubenswrapper[4847]: I0218 00:38:14.913465 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6zsk5" Feb 18 00:38:14 crc kubenswrapper[4847]: I0218 00:38:14.937722 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6zsk5"] Feb 18 00:38:15 crc kubenswrapper[4847]: I0218 00:38:15.062976 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de9322f7-7dc0-4cbe-b171-e322f079f377-utilities\") pod \"redhat-operators-6zsk5\" (UID: \"de9322f7-7dc0-4cbe-b171-e322f079f377\") " pod="openshift-marketplace/redhat-operators-6zsk5" Feb 18 00:38:15 crc kubenswrapper[4847]: I0218 00:38:15.063042 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpc6w\" (UniqueName: \"kubernetes.io/projected/de9322f7-7dc0-4cbe-b171-e322f079f377-kube-api-access-lpc6w\") pod \"redhat-operators-6zsk5\" (UID: \"de9322f7-7dc0-4cbe-b171-e322f079f377\") " pod="openshift-marketplace/redhat-operators-6zsk5" Feb 18 00:38:15 crc kubenswrapper[4847]: I0218 00:38:15.063109 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de9322f7-7dc0-4cbe-b171-e322f079f377-catalog-content\") pod \"redhat-operators-6zsk5\" (UID: \"de9322f7-7dc0-4cbe-b171-e322f079f377\") " pod="openshift-marketplace/redhat-operators-6zsk5" Feb 18 00:38:15 crc kubenswrapper[4847]: I0218 00:38:15.164477 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de9322f7-7dc0-4cbe-b171-e322f079f377-utilities\") pod \"redhat-operators-6zsk5\" (UID: \"de9322f7-7dc0-4cbe-b171-e322f079f377\") " pod="openshift-marketplace/redhat-operators-6zsk5" Feb 18 00:38:15 crc kubenswrapper[4847]: I0218 
00:38:15.164550 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpc6w\" (UniqueName: \"kubernetes.io/projected/de9322f7-7dc0-4cbe-b171-e322f079f377-kube-api-access-lpc6w\") pod \"redhat-operators-6zsk5\" (UID: \"de9322f7-7dc0-4cbe-b171-e322f079f377\") " pod="openshift-marketplace/redhat-operators-6zsk5" Feb 18 00:38:15 crc kubenswrapper[4847]: I0218 00:38:15.164643 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de9322f7-7dc0-4cbe-b171-e322f079f377-catalog-content\") pod \"redhat-operators-6zsk5\" (UID: \"de9322f7-7dc0-4cbe-b171-e322f079f377\") " pod="openshift-marketplace/redhat-operators-6zsk5" Feb 18 00:38:15 crc kubenswrapper[4847]: I0218 00:38:15.165211 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de9322f7-7dc0-4cbe-b171-e322f079f377-utilities\") pod \"redhat-operators-6zsk5\" (UID: \"de9322f7-7dc0-4cbe-b171-e322f079f377\") " pod="openshift-marketplace/redhat-operators-6zsk5" Feb 18 00:38:15 crc kubenswrapper[4847]: I0218 00:38:15.165248 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de9322f7-7dc0-4cbe-b171-e322f079f377-catalog-content\") pod \"redhat-operators-6zsk5\" (UID: \"de9322f7-7dc0-4cbe-b171-e322f079f377\") " pod="openshift-marketplace/redhat-operators-6zsk5" Feb 18 00:38:15 crc kubenswrapper[4847]: I0218 00:38:15.193767 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpc6w\" (UniqueName: \"kubernetes.io/projected/de9322f7-7dc0-4cbe-b171-e322f079f377-kube-api-access-lpc6w\") pod \"redhat-operators-6zsk5\" (UID: \"de9322f7-7dc0-4cbe-b171-e322f079f377\") " pod="openshift-marketplace/redhat-operators-6zsk5" Feb 18 00:38:15 crc kubenswrapper[4847]: I0218 00:38:15.242831 4847 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6zsk5" Feb 18 00:38:15 crc kubenswrapper[4847]: I0218 00:38:15.530723 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6zsk5"] Feb 18 00:38:16 crc kubenswrapper[4847]: I0218 00:38:16.047375 4847 generic.go:334] "Generic (PLEG): container finished" podID="de9322f7-7dc0-4cbe-b171-e322f079f377" containerID="adf6f35e23ba93d6fd485dcc73d3c08c1421d46cf8a828461a4d669ff37aece9" exitCode=0 Feb 18 00:38:16 crc kubenswrapper[4847]: I0218 00:38:16.047445 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6zsk5" event={"ID":"de9322f7-7dc0-4cbe-b171-e322f079f377","Type":"ContainerDied","Data":"adf6f35e23ba93d6fd485dcc73d3c08c1421d46cf8a828461a4d669ff37aece9"} Feb 18 00:38:16 crc kubenswrapper[4847]: I0218 00:38:16.047488 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6zsk5" event={"ID":"de9322f7-7dc0-4cbe-b171-e322f079f377","Type":"ContainerStarted","Data":"606bc2e1a4ee8ed936478345e9b6e7baf2a26a1092e16c612982f2859673eb6a"} Feb 18 00:38:17 crc kubenswrapper[4847]: I0218 00:38:17.058415 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6zsk5" event={"ID":"de9322f7-7dc0-4cbe-b171-e322f079f377","Type":"ContainerStarted","Data":"60009c2f512c67427043a07ecd0c29c1c8c738fea0b00312dea6bcd844a66541"} Feb 18 00:38:18 crc kubenswrapper[4847]: I0218 00:38:18.069005 4847 generic.go:334] "Generic (PLEG): container finished" podID="de9322f7-7dc0-4cbe-b171-e322f079f377" containerID="60009c2f512c67427043a07ecd0c29c1c8c738fea0b00312dea6bcd844a66541" exitCode=0 Feb 18 00:38:18 crc kubenswrapper[4847]: I0218 00:38:18.069176 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6zsk5" 
event={"ID":"de9322f7-7dc0-4cbe-b171-e322f079f377","Type":"ContainerDied","Data":"60009c2f512c67427043a07ecd0c29c1c8c738fea0b00312dea6bcd844a66541"} Feb 18 00:38:19 crc kubenswrapper[4847]: I0218 00:38:19.083282 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6zsk5" event={"ID":"de9322f7-7dc0-4cbe-b171-e322f079f377","Type":"ContainerStarted","Data":"df0aac57506b70ee43c7e649f87547bcdf747c8756918232810b3e90449578d0"} Feb 18 00:38:19 crc kubenswrapper[4847]: I0218 00:38:19.116294 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6zsk5" podStartSLOduration=2.663664839 podStartE2EDuration="5.116273467s" podCreationTimestamp="2026-02-18 00:38:14 +0000 UTC" firstStartedPulling="2026-02-18 00:38:16.048925784 +0000 UTC m=+769.426276736" lastFinishedPulling="2026-02-18 00:38:18.501534412 +0000 UTC m=+771.878885364" observedRunningTime="2026-02-18 00:38:19.112848349 +0000 UTC m=+772.490199311" watchObservedRunningTime="2026-02-18 00:38:19.116273467 +0000 UTC m=+772.493624429" Feb 18 00:38:23 crc kubenswrapper[4847]: I0218 00:38:23.491583 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:38:23 crc kubenswrapper[4847]: I0218 00:38:23.492161 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:38:23 crc kubenswrapper[4847]: I0218 00:38:23.492249 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 00:38:23 crc kubenswrapper[4847]: I0218 00:38:23.493443 4847 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7e14399c572be0bcab6145068e4196c5aff977a8de62be4c5222c60a21f3d43d"} pod="openshift-machine-config-operator/machine-config-daemon-xsj47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:38:23 crc kubenswrapper[4847]: I0218 00:38:23.493591 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" containerID="cri-o://7e14399c572be0bcab6145068e4196c5aff977a8de62be4c5222c60a21f3d43d" gracePeriod=600 Feb 18 00:38:24 crc kubenswrapper[4847]: I0218 00:38:24.131960 4847 generic.go:334] "Generic (PLEG): container finished" podID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerID="7e14399c572be0bcab6145068e4196c5aff977a8de62be4c5222c60a21f3d43d" exitCode=0 Feb 18 00:38:24 crc kubenswrapper[4847]: I0218 00:38:24.132095 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerDied","Data":"7e14399c572be0bcab6145068e4196c5aff977a8de62be4c5222c60a21f3d43d"} Feb 18 00:38:24 crc kubenswrapper[4847]: I0218 00:38:24.132378 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerStarted","Data":"2ffcd87b881b6139f9535c89dd0258cbf56290dc9a8d88b06780fd38c9f1e0fa"} Feb 18 00:38:24 crc kubenswrapper[4847]: I0218 00:38:24.132406 4847 scope.go:117] "RemoveContainer" 
containerID="44d600d2b749459f03a3c1cdd67507236e73f363dd766a116429b214e5f46a17" Feb 18 00:38:25 crc kubenswrapper[4847]: I0218 00:38:25.244001 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6zsk5" Feb 18 00:38:25 crc kubenswrapper[4847]: I0218 00:38:25.245784 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6zsk5" Feb 18 00:38:26 crc kubenswrapper[4847]: I0218 00:38:26.306904 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6zsk5" podUID="de9322f7-7dc0-4cbe-b171-e322f079f377" containerName="registry-server" probeResult="failure" output=< Feb 18 00:38:26 crc kubenswrapper[4847]: timeout: failed to connect service ":50051" within 1s Feb 18 00:38:26 crc kubenswrapper[4847]: > Feb 18 00:38:26 crc kubenswrapper[4847]: I0218 00:38:26.821299 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk"] Feb 18 00:38:26 crc kubenswrapper[4847]: I0218 00:38:26.823133 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk" Feb 18 00:38:26 crc kubenswrapper[4847]: I0218 00:38:26.827265 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 00:38:26 crc kubenswrapper[4847]: I0218 00:38:26.834783 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk"] Feb 18 00:38:26 crc kubenswrapper[4847]: I0218 00:38:26.929703 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2bfv\" (UniqueName: \"kubernetes.io/projected/f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb-kube-api-access-d2bfv\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk\" (UID: \"f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk" Feb 18 00:38:26 crc kubenswrapper[4847]: I0218 00:38:26.929775 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk\" (UID: \"f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk" Feb 18 00:38:26 crc kubenswrapper[4847]: I0218 00:38:26.929816 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk\" (UID: \"f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk" Feb 18 00:38:27 crc kubenswrapper[4847]: 
I0218 00:38:27.030677 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2bfv\" (UniqueName: \"kubernetes.io/projected/f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb-kube-api-access-d2bfv\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk\" (UID: \"f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk" Feb 18 00:38:27 crc kubenswrapper[4847]: I0218 00:38:27.030729 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk\" (UID: \"f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk" Feb 18 00:38:27 crc kubenswrapper[4847]: I0218 00:38:27.030755 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk\" (UID: \"f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk" Feb 18 00:38:27 crc kubenswrapper[4847]: I0218 00:38:27.031422 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk\" (UID: \"f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk" Feb 18 00:38:27 crc kubenswrapper[4847]: I0218 00:38:27.031945 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk\" (UID: \"f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk" Feb 18 00:38:27 crc kubenswrapper[4847]: I0218 00:38:27.063672 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2bfv\" (UniqueName: \"kubernetes.io/projected/f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb-kube-api-access-d2bfv\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk\" (UID: \"f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk" Feb 18 00:38:27 crc kubenswrapper[4847]: I0218 00:38:27.152129 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk" Feb 18 00:38:27 crc kubenswrapper[4847]: I0218 00:38:27.227616 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp"] Feb 18 00:38:27 crc kubenswrapper[4847]: I0218 00:38:27.228674 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp" Feb 18 00:38:27 crc kubenswrapper[4847]: I0218 00:38:27.242652 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp"] Feb 18 00:38:27 crc kubenswrapper[4847]: I0218 00:38:27.338255 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/261e46ac-b43f-490f-bdbe-8181cbecdf0d-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp\" (UID: \"261e46ac-b43f-490f-bdbe-8181cbecdf0d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp" Feb 18 00:38:27 crc kubenswrapper[4847]: I0218 00:38:27.338342 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlptp\" (UniqueName: \"kubernetes.io/projected/261e46ac-b43f-490f-bdbe-8181cbecdf0d-kube-api-access-xlptp\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp\" (UID: \"261e46ac-b43f-490f-bdbe-8181cbecdf0d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp" Feb 18 00:38:27 crc kubenswrapper[4847]: I0218 00:38:27.338389 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/261e46ac-b43f-490f-bdbe-8181cbecdf0d-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp\" (UID: \"261e46ac-b43f-490f-bdbe-8181cbecdf0d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp" Feb 18 00:38:27 crc kubenswrapper[4847]: I0218 00:38:27.438928 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlptp\" (UniqueName: 
\"kubernetes.io/projected/261e46ac-b43f-490f-bdbe-8181cbecdf0d-kube-api-access-xlptp\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp\" (UID: \"261e46ac-b43f-490f-bdbe-8181cbecdf0d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp" Feb 18 00:38:27 crc kubenswrapper[4847]: I0218 00:38:27.439221 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/261e46ac-b43f-490f-bdbe-8181cbecdf0d-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp\" (UID: \"261e46ac-b43f-490f-bdbe-8181cbecdf0d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp" Feb 18 00:38:27 crc kubenswrapper[4847]: I0218 00:38:27.439250 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/261e46ac-b43f-490f-bdbe-8181cbecdf0d-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp\" (UID: \"261e46ac-b43f-490f-bdbe-8181cbecdf0d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp" Feb 18 00:38:27 crc kubenswrapper[4847]: I0218 00:38:27.440149 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/261e46ac-b43f-490f-bdbe-8181cbecdf0d-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp\" (UID: \"261e46ac-b43f-490f-bdbe-8181cbecdf0d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp" Feb 18 00:38:27 crc kubenswrapper[4847]: I0218 00:38:27.440372 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/261e46ac-b43f-490f-bdbe-8181cbecdf0d-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp\" (UID: 
\"261e46ac-b43f-490f-bdbe-8181cbecdf0d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp" Feb 18 00:38:27 crc kubenswrapper[4847]: I0218 00:38:27.456383 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlptp\" (UniqueName: \"kubernetes.io/projected/261e46ac-b43f-490f-bdbe-8181cbecdf0d-kube-api-access-xlptp\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp\" (UID: \"261e46ac-b43f-490f-bdbe-8181cbecdf0d\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp" Feb 18 00:38:27 crc kubenswrapper[4847]: I0218 00:38:27.545876 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp" Feb 18 00:38:27 crc kubenswrapper[4847]: I0218 00:38:27.670577 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk"] Feb 18 00:38:27 crc kubenswrapper[4847]: W0218 00:38:27.714046 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0378fa3_c8b4_43a3_bf6e_14a9066f1fcb.slice/crio-6a3940dc6ef86011952d71e8e9ef5aeda2af1550fef1be12b9684b8b3f6b7630 WatchSource:0}: Error finding container 6a3940dc6ef86011952d71e8e9ef5aeda2af1550fef1be12b9684b8b3f6b7630: Status 404 returned error can't find the container with id 6a3940dc6ef86011952d71e8e9ef5aeda2af1550fef1be12b9684b8b3f6b7630 Feb 18 00:38:28 crc kubenswrapper[4847]: I0218 00:38:28.010031 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp"] Feb 18 00:38:28 crc kubenswrapper[4847]: W0218 00:38:28.021253 4847 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod261e46ac_b43f_490f_bdbe_8181cbecdf0d.slice/crio-c13b3fc922ae8a73ea12cde58a4e68c230e77c28e218bc0c6682d9e12c41af3a WatchSource:0}: Error finding container c13b3fc922ae8a73ea12cde58a4e68c230e77c28e218bc0c6682d9e12c41af3a: Status 404 returned error can't find the container with id c13b3fc922ae8a73ea12cde58a4e68c230e77c28e218bc0c6682d9e12c41af3a Feb 18 00:38:28 crc kubenswrapper[4847]: I0218 00:38:28.162233 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp" event={"ID":"261e46ac-b43f-490f-bdbe-8181cbecdf0d","Type":"ContainerStarted","Data":"c13b3fc922ae8a73ea12cde58a4e68c230e77c28e218bc0c6682d9e12c41af3a"} Feb 18 00:38:28 crc kubenswrapper[4847]: I0218 00:38:28.164332 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk" event={"ID":"f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb","Type":"ContainerStarted","Data":"3b5da211956cf883c13a5a3c4affdf85d2fc09077e6a2bd52dcccdf48234b294"} Feb 18 00:38:28 crc kubenswrapper[4847]: I0218 00:38:28.164387 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk" event={"ID":"f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb","Type":"ContainerStarted","Data":"6a3940dc6ef86011952d71e8e9ef5aeda2af1550fef1be12b9684b8b3f6b7630"} Feb 18 00:38:29 crc kubenswrapper[4847]: I0218 00:38:29.186713 4847 generic.go:334] "Generic (PLEG): container finished" podID="f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb" containerID="3b5da211956cf883c13a5a3c4affdf85d2fc09077e6a2bd52dcccdf48234b294" exitCode=0 Feb 18 00:38:29 crc kubenswrapper[4847]: I0218 00:38:29.186988 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk" 
event={"ID":"f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb","Type":"ContainerDied","Data":"3b5da211956cf883c13a5a3c4affdf85d2fc09077e6a2bd52dcccdf48234b294"} Feb 18 00:38:29 crc kubenswrapper[4847]: I0218 00:38:29.189653 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp" event={"ID":"261e46ac-b43f-490f-bdbe-8181cbecdf0d","Type":"ContainerStarted","Data":"4448f19ac9e6999d5f0425a479aee6f1332647b399b5ebb4affa3b2c47cc193c"} Feb 18 00:38:30 crc kubenswrapper[4847]: I0218 00:38:30.199403 4847 generic.go:334] "Generic (PLEG): container finished" podID="261e46ac-b43f-490f-bdbe-8181cbecdf0d" containerID="4448f19ac9e6999d5f0425a479aee6f1332647b399b5ebb4affa3b2c47cc193c" exitCode=0 Feb 18 00:38:30 crc kubenswrapper[4847]: I0218 00:38:30.199512 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp" event={"ID":"261e46ac-b43f-490f-bdbe-8181cbecdf0d","Type":"ContainerDied","Data":"4448f19ac9e6999d5f0425a479aee6f1332647b399b5ebb4affa3b2c47cc193c"} Feb 18 00:38:31 crc kubenswrapper[4847]: I0218 00:38:31.211482 4847 generic.go:334] "Generic (PLEG): container finished" podID="f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb" containerID="01171e800d3cb539b2116ac9b5246104dc2030f2f92007c0642e07785c0a9b76" exitCode=0 Feb 18 00:38:31 crc kubenswrapper[4847]: I0218 00:38:31.211571 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk" event={"ID":"f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb","Type":"ContainerDied","Data":"01171e800d3cb539b2116ac9b5246104dc2030f2f92007c0642e07785c0a9b76"} Feb 18 00:38:32 crc kubenswrapper[4847]: I0218 00:38:32.239901 4847 generic.go:334] "Generic (PLEG): container finished" podID="f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb" containerID="e4d8c6045a7451bdca4a44fdd28484c5e15e208156389d8b73cef88fad399f36" exitCode=0 
Feb 18 00:38:32 crc kubenswrapper[4847]: I0218 00:38:32.239978 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk" event={"ID":"f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb","Type":"ContainerDied","Data":"e4d8c6045a7451bdca4a44fdd28484c5e15e208156389d8b73cef88fad399f36"} Feb 18 00:38:32 crc kubenswrapper[4847]: I0218 00:38:32.246228 4847 generic.go:334] "Generic (PLEG): container finished" podID="261e46ac-b43f-490f-bdbe-8181cbecdf0d" containerID="f0e3c721375629c6d04fb9a3d28a9ca241954b617f3e1f526b4d281f927d2846" exitCode=0 Feb 18 00:38:32 crc kubenswrapper[4847]: I0218 00:38:32.246274 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp" event={"ID":"261e46ac-b43f-490f-bdbe-8181cbecdf0d","Type":"ContainerDied","Data":"f0e3c721375629c6d04fb9a3d28a9ca241954b617f3e1f526b4d281f927d2846"} Feb 18 00:38:33 crc kubenswrapper[4847]: I0218 00:38:33.258242 4847 generic.go:334] "Generic (PLEG): container finished" podID="261e46ac-b43f-490f-bdbe-8181cbecdf0d" containerID="c42dd7f3528b2d8d2182b79032c7b2d77daf238e6f93985a5fbbd57ba76448cb" exitCode=0 Feb 18 00:38:33 crc kubenswrapper[4847]: I0218 00:38:33.258376 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp" event={"ID":"261e46ac-b43f-490f-bdbe-8181cbecdf0d","Type":"ContainerDied","Data":"c42dd7f3528b2d8d2182b79032c7b2d77daf238e6f93985a5fbbd57ba76448cb"} Feb 18 00:38:33 crc kubenswrapper[4847]: I0218 00:38:33.547292 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk" Feb 18 00:38:33 crc kubenswrapper[4847]: I0218 00:38:33.726695 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb-util\") pod \"f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb\" (UID: \"f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb\") " Feb 18 00:38:33 crc kubenswrapper[4847]: I0218 00:38:33.726859 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb-bundle\") pod \"f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb\" (UID: \"f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb\") " Feb 18 00:38:33 crc kubenswrapper[4847]: I0218 00:38:33.726925 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2bfv\" (UniqueName: \"kubernetes.io/projected/f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb-kube-api-access-d2bfv\") pod \"f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb\" (UID: \"f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb\") " Feb 18 00:38:33 crc kubenswrapper[4847]: I0218 00:38:33.728930 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb-bundle" (OuterVolumeSpecName: "bundle") pod "f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb" (UID: "f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:38:33 crc kubenswrapper[4847]: I0218 00:38:33.737003 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb-kube-api-access-d2bfv" (OuterVolumeSpecName: "kube-api-access-d2bfv") pod "f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb" (UID: "f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb"). InnerVolumeSpecName "kube-api-access-d2bfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:38:33 crc kubenswrapper[4847]: I0218 00:38:33.830440 4847 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:33 crc kubenswrapper[4847]: I0218 00:38:33.830489 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2bfv\" (UniqueName: \"kubernetes.io/projected/f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb-kube-api-access-d2bfv\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:34 crc kubenswrapper[4847]: I0218 00:38:34.269972 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk" Feb 18 00:38:34 crc kubenswrapper[4847]: I0218 00:38:34.270419 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk" event={"ID":"f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb","Type":"ContainerDied","Data":"6a3940dc6ef86011952d71e8e9ef5aeda2af1550fef1be12b9684b8b3f6b7630"} Feb 18 00:38:34 crc kubenswrapper[4847]: I0218 00:38:34.270456 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a3940dc6ef86011952d71e8e9ef5aeda2af1550fef1be12b9684b8b3f6b7630" Feb 18 00:38:34 crc kubenswrapper[4847]: I0218 00:38:34.275956 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb-util" (OuterVolumeSpecName: "util") pod "f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb" (UID: "f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:38:34 crc kubenswrapper[4847]: I0218 00:38:34.337257 4847 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb-util\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:34 crc kubenswrapper[4847]: I0218 00:38:34.654837 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp" Feb 18 00:38:34 crc kubenswrapper[4847]: I0218 00:38:34.847274 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/261e46ac-b43f-490f-bdbe-8181cbecdf0d-util\") pod \"261e46ac-b43f-490f-bdbe-8181cbecdf0d\" (UID: \"261e46ac-b43f-490f-bdbe-8181cbecdf0d\") " Feb 18 00:38:34 crc kubenswrapper[4847]: I0218 00:38:34.847357 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/261e46ac-b43f-490f-bdbe-8181cbecdf0d-bundle\") pod \"261e46ac-b43f-490f-bdbe-8181cbecdf0d\" (UID: \"261e46ac-b43f-490f-bdbe-8181cbecdf0d\") " Feb 18 00:38:34 crc kubenswrapper[4847]: I0218 00:38:34.847392 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlptp\" (UniqueName: \"kubernetes.io/projected/261e46ac-b43f-490f-bdbe-8181cbecdf0d-kube-api-access-xlptp\") pod \"261e46ac-b43f-490f-bdbe-8181cbecdf0d\" (UID: \"261e46ac-b43f-490f-bdbe-8181cbecdf0d\") " Feb 18 00:38:34 crc kubenswrapper[4847]: I0218 00:38:34.848320 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/261e46ac-b43f-490f-bdbe-8181cbecdf0d-bundle" (OuterVolumeSpecName: "bundle") pod "261e46ac-b43f-490f-bdbe-8181cbecdf0d" (UID: "261e46ac-b43f-490f-bdbe-8181cbecdf0d"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:38:34 crc kubenswrapper[4847]: I0218 00:38:34.857259 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/261e46ac-b43f-490f-bdbe-8181cbecdf0d-util" (OuterVolumeSpecName: "util") pod "261e46ac-b43f-490f-bdbe-8181cbecdf0d" (UID: "261e46ac-b43f-490f-bdbe-8181cbecdf0d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:38:34 crc kubenswrapper[4847]: I0218 00:38:34.861829 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/261e46ac-b43f-490f-bdbe-8181cbecdf0d-kube-api-access-xlptp" (OuterVolumeSpecName: "kube-api-access-xlptp") pod "261e46ac-b43f-490f-bdbe-8181cbecdf0d" (UID: "261e46ac-b43f-490f-bdbe-8181cbecdf0d"). InnerVolumeSpecName "kube-api-access-xlptp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:38:34 crc kubenswrapper[4847]: I0218 00:38:34.948561 4847 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/261e46ac-b43f-490f-bdbe-8181cbecdf0d-util\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:34 crc kubenswrapper[4847]: I0218 00:38:34.948638 4847 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/261e46ac-b43f-490f-bdbe-8181cbecdf0d-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:34 crc kubenswrapper[4847]: I0218 00:38:34.948662 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlptp\" (UniqueName: \"kubernetes.io/projected/261e46ac-b43f-490f-bdbe-8181cbecdf0d-kube-api-access-xlptp\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:35 crc kubenswrapper[4847]: I0218 00:38:35.279993 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp" 
event={"ID":"261e46ac-b43f-490f-bdbe-8181cbecdf0d","Type":"ContainerDied","Data":"c13b3fc922ae8a73ea12cde58a4e68c230e77c28e218bc0c6682d9e12c41af3a"} Feb 18 00:38:35 crc kubenswrapper[4847]: I0218 00:38:35.280055 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c13b3fc922ae8a73ea12cde58a4e68c230e77c28e218bc0c6682d9e12c41af3a" Feb 18 00:38:35 crc kubenswrapper[4847]: I0218 00:38:35.280095 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp" Feb 18 00:38:35 crc kubenswrapper[4847]: I0218 00:38:35.312295 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6zsk5" Feb 18 00:38:35 crc kubenswrapper[4847]: I0218 00:38:35.361743 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6zsk5" Feb 18 00:38:36 crc kubenswrapper[4847]: I0218 00:38:36.563894 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6zsk5"] Feb 18 00:38:37 crc kubenswrapper[4847]: I0218 00:38:37.293133 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6zsk5" podUID="de9322f7-7dc0-4cbe-b171-e322f079f377" containerName="registry-server" containerID="cri-o://df0aac57506b70ee43c7e649f87547bcdf747c8756918232810b3e90449578d0" gracePeriod=2 Feb 18 00:38:37 crc kubenswrapper[4847]: I0218 00:38:37.658430 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6zsk5" Feb 18 00:38:37 crc kubenswrapper[4847]: I0218 00:38:37.788235 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de9322f7-7dc0-4cbe-b171-e322f079f377-catalog-content\") pod \"de9322f7-7dc0-4cbe-b171-e322f079f377\" (UID: \"de9322f7-7dc0-4cbe-b171-e322f079f377\") " Feb 18 00:38:37 crc kubenswrapper[4847]: I0218 00:38:37.788419 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpc6w\" (UniqueName: \"kubernetes.io/projected/de9322f7-7dc0-4cbe-b171-e322f079f377-kube-api-access-lpc6w\") pod \"de9322f7-7dc0-4cbe-b171-e322f079f377\" (UID: \"de9322f7-7dc0-4cbe-b171-e322f079f377\") " Feb 18 00:38:37 crc kubenswrapper[4847]: I0218 00:38:37.788534 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de9322f7-7dc0-4cbe-b171-e322f079f377-utilities\") pod \"de9322f7-7dc0-4cbe-b171-e322f079f377\" (UID: \"de9322f7-7dc0-4cbe-b171-e322f079f377\") " Feb 18 00:38:37 crc kubenswrapper[4847]: I0218 00:38:37.789765 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de9322f7-7dc0-4cbe-b171-e322f079f377-utilities" (OuterVolumeSpecName: "utilities") pod "de9322f7-7dc0-4cbe-b171-e322f079f377" (UID: "de9322f7-7dc0-4cbe-b171-e322f079f377"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:38:37 crc kubenswrapper[4847]: I0218 00:38:37.796258 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de9322f7-7dc0-4cbe-b171-e322f079f377-kube-api-access-lpc6w" (OuterVolumeSpecName: "kube-api-access-lpc6w") pod "de9322f7-7dc0-4cbe-b171-e322f079f377" (UID: "de9322f7-7dc0-4cbe-b171-e322f079f377"). InnerVolumeSpecName "kube-api-access-lpc6w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:38:37 crc kubenswrapper[4847]: I0218 00:38:37.889939 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de9322f7-7dc0-4cbe-b171-e322f079f377-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:37 crc kubenswrapper[4847]: I0218 00:38:37.889985 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpc6w\" (UniqueName: \"kubernetes.io/projected/de9322f7-7dc0-4cbe-b171-e322f079f377-kube-api-access-lpc6w\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:37 crc kubenswrapper[4847]: I0218 00:38:37.982553 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de9322f7-7dc0-4cbe-b171-e322f079f377-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de9322f7-7dc0-4cbe-b171-e322f079f377" (UID: "de9322f7-7dc0-4cbe-b171-e322f079f377"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:38:37 crc kubenswrapper[4847]: I0218 00:38:37.990488 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de9322f7-7dc0-4cbe-b171-e322f079f377-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:38 crc kubenswrapper[4847]: I0218 00:38:38.300947 4847 generic.go:334] "Generic (PLEG): container finished" podID="de9322f7-7dc0-4cbe-b171-e322f079f377" containerID="df0aac57506b70ee43c7e649f87547bcdf747c8756918232810b3e90449578d0" exitCode=0 Feb 18 00:38:38 crc kubenswrapper[4847]: I0218 00:38:38.300999 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6zsk5" event={"ID":"de9322f7-7dc0-4cbe-b171-e322f079f377","Type":"ContainerDied","Data":"df0aac57506b70ee43c7e649f87547bcdf747c8756918232810b3e90449578d0"} Feb 18 00:38:38 crc kubenswrapper[4847]: I0218 00:38:38.301028 4847 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6zsk5" Feb 18 00:38:38 crc kubenswrapper[4847]: I0218 00:38:38.301053 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6zsk5" event={"ID":"de9322f7-7dc0-4cbe-b171-e322f079f377","Type":"ContainerDied","Data":"606bc2e1a4ee8ed936478345e9b6e7baf2a26a1092e16c612982f2859673eb6a"} Feb 18 00:38:38 crc kubenswrapper[4847]: I0218 00:38:38.301073 4847 scope.go:117] "RemoveContainer" containerID="df0aac57506b70ee43c7e649f87547bcdf747c8756918232810b3e90449578d0" Feb 18 00:38:38 crc kubenswrapper[4847]: I0218 00:38:38.319345 4847 scope.go:117] "RemoveContainer" containerID="60009c2f512c67427043a07ecd0c29c1c8c738fea0b00312dea6bcd844a66541" Feb 18 00:38:38 crc kubenswrapper[4847]: I0218 00:38:38.368853 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6zsk5"] Feb 18 00:38:38 crc kubenswrapper[4847]: I0218 00:38:38.372304 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6zsk5"] Feb 18 00:38:38 crc kubenswrapper[4847]: I0218 00:38:38.373160 4847 scope.go:117] "RemoveContainer" containerID="adf6f35e23ba93d6fd485dcc73d3c08c1421d46cf8a828461a4d669ff37aece9" Feb 18 00:38:38 crc kubenswrapper[4847]: I0218 00:38:38.397038 4847 scope.go:117] "RemoveContainer" containerID="df0aac57506b70ee43c7e649f87547bcdf747c8756918232810b3e90449578d0" Feb 18 00:38:38 crc kubenswrapper[4847]: E0218 00:38:38.397456 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df0aac57506b70ee43c7e649f87547bcdf747c8756918232810b3e90449578d0\": container with ID starting with df0aac57506b70ee43c7e649f87547bcdf747c8756918232810b3e90449578d0 not found: ID does not exist" containerID="df0aac57506b70ee43c7e649f87547bcdf747c8756918232810b3e90449578d0" Feb 18 00:38:38 crc kubenswrapper[4847]: I0218 00:38:38.397498 4847 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df0aac57506b70ee43c7e649f87547bcdf747c8756918232810b3e90449578d0"} err="failed to get container status \"df0aac57506b70ee43c7e649f87547bcdf747c8756918232810b3e90449578d0\": rpc error: code = NotFound desc = could not find container \"df0aac57506b70ee43c7e649f87547bcdf747c8756918232810b3e90449578d0\": container with ID starting with df0aac57506b70ee43c7e649f87547bcdf747c8756918232810b3e90449578d0 not found: ID does not exist" Feb 18 00:38:38 crc kubenswrapper[4847]: I0218 00:38:38.397526 4847 scope.go:117] "RemoveContainer" containerID="60009c2f512c67427043a07ecd0c29c1c8c738fea0b00312dea6bcd844a66541" Feb 18 00:38:38 crc kubenswrapper[4847]: E0218 00:38:38.397794 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60009c2f512c67427043a07ecd0c29c1c8c738fea0b00312dea6bcd844a66541\": container with ID starting with 60009c2f512c67427043a07ecd0c29c1c8c738fea0b00312dea6bcd844a66541 not found: ID does not exist" containerID="60009c2f512c67427043a07ecd0c29c1c8c738fea0b00312dea6bcd844a66541" Feb 18 00:38:38 crc kubenswrapper[4847]: I0218 00:38:38.397877 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60009c2f512c67427043a07ecd0c29c1c8c738fea0b00312dea6bcd844a66541"} err="failed to get container status \"60009c2f512c67427043a07ecd0c29c1c8c738fea0b00312dea6bcd844a66541\": rpc error: code = NotFound desc = could not find container \"60009c2f512c67427043a07ecd0c29c1c8c738fea0b00312dea6bcd844a66541\": container with ID starting with 60009c2f512c67427043a07ecd0c29c1c8c738fea0b00312dea6bcd844a66541 not found: ID does not exist" Feb 18 00:38:38 crc kubenswrapper[4847]: I0218 00:38:38.397948 4847 scope.go:117] "RemoveContainer" containerID="adf6f35e23ba93d6fd485dcc73d3c08c1421d46cf8a828461a4d669ff37aece9" Feb 18 00:38:38 crc kubenswrapper[4847]: E0218 
00:38:38.398216 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adf6f35e23ba93d6fd485dcc73d3c08c1421d46cf8a828461a4d669ff37aece9\": container with ID starting with adf6f35e23ba93d6fd485dcc73d3c08c1421d46cf8a828461a4d669ff37aece9 not found: ID does not exist" containerID="adf6f35e23ba93d6fd485dcc73d3c08c1421d46cf8a828461a4d669ff37aece9" Feb 18 00:38:38 crc kubenswrapper[4847]: I0218 00:38:38.398241 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adf6f35e23ba93d6fd485dcc73d3c08c1421d46cf8a828461a4d669ff37aece9"} err="failed to get container status \"adf6f35e23ba93d6fd485dcc73d3c08c1421d46cf8a828461a4d669ff37aece9\": rpc error: code = NotFound desc = could not find container \"adf6f35e23ba93d6fd485dcc73d3c08c1421d46cf8a828461a4d669ff37aece9\": container with ID starting with adf6f35e23ba93d6fd485dcc73d3c08c1421d46cf8a828461a4d669ff37aece9 not found: ID does not exist" Feb 18 00:38:39 crc kubenswrapper[4847]: I0218 00:38:39.412730 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de9322f7-7dc0-4cbe-b171-e322f079f377" path="/var/lib/kubelet/pods/de9322f7-7dc0-4cbe-b171-e322f079f377/volumes" Feb 18 00:38:40 crc kubenswrapper[4847]: I0218 00:38:40.772504 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h9thh"] Feb 18 00:38:40 crc kubenswrapper[4847]: E0218 00:38:40.772851 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de9322f7-7dc0-4cbe-b171-e322f079f377" containerName="registry-server" Feb 18 00:38:40 crc kubenswrapper[4847]: I0218 00:38:40.772873 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="de9322f7-7dc0-4cbe-b171-e322f079f377" containerName="registry-server" Feb 18 00:38:40 crc kubenswrapper[4847]: E0218 00:38:40.772890 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261e46ac-b43f-490f-bdbe-8181cbecdf0d" 
containerName="pull" Feb 18 00:38:40 crc kubenswrapper[4847]: I0218 00:38:40.772903 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="261e46ac-b43f-490f-bdbe-8181cbecdf0d" containerName="pull" Feb 18 00:38:40 crc kubenswrapper[4847]: E0218 00:38:40.772918 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261e46ac-b43f-490f-bdbe-8181cbecdf0d" containerName="util" Feb 18 00:38:40 crc kubenswrapper[4847]: I0218 00:38:40.772930 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="261e46ac-b43f-490f-bdbe-8181cbecdf0d" containerName="util" Feb 18 00:38:40 crc kubenswrapper[4847]: E0218 00:38:40.772953 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb" containerName="extract" Feb 18 00:38:40 crc kubenswrapper[4847]: I0218 00:38:40.772965 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb" containerName="extract" Feb 18 00:38:40 crc kubenswrapper[4847]: E0218 00:38:40.772991 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb" containerName="util" Feb 18 00:38:40 crc kubenswrapper[4847]: I0218 00:38:40.773003 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb" containerName="util" Feb 18 00:38:40 crc kubenswrapper[4847]: E0218 00:38:40.773020 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261e46ac-b43f-490f-bdbe-8181cbecdf0d" containerName="extract" Feb 18 00:38:40 crc kubenswrapper[4847]: I0218 00:38:40.773032 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="261e46ac-b43f-490f-bdbe-8181cbecdf0d" containerName="extract" Feb 18 00:38:40 crc kubenswrapper[4847]: E0218 00:38:40.773052 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de9322f7-7dc0-4cbe-b171-e322f079f377" containerName="extract-utilities" Feb 18 00:38:40 crc kubenswrapper[4847]: I0218 00:38:40.773064 4847 
state_mem.go:107] "Deleted CPUSet assignment" podUID="de9322f7-7dc0-4cbe-b171-e322f079f377" containerName="extract-utilities" Feb 18 00:38:40 crc kubenswrapper[4847]: E0218 00:38:40.773085 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb" containerName="pull" Feb 18 00:38:40 crc kubenswrapper[4847]: I0218 00:38:40.773096 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb" containerName="pull" Feb 18 00:38:40 crc kubenswrapper[4847]: E0218 00:38:40.773115 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de9322f7-7dc0-4cbe-b171-e322f079f377" containerName="extract-content" Feb 18 00:38:40 crc kubenswrapper[4847]: I0218 00:38:40.773126 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="de9322f7-7dc0-4cbe-b171-e322f079f377" containerName="extract-content" Feb 18 00:38:40 crc kubenswrapper[4847]: I0218 00:38:40.773288 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb" containerName="extract" Feb 18 00:38:40 crc kubenswrapper[4847]: I0218 00:38:40.773326 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="261e46ac-b43f-490f-bdbe-8181cbecdf0d" containerName="extract" Feb 18 00:38:40 crc kubenswrapper[4847]: I0218 00:38:40.773345 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="de9322f7-7dc0-4cbe-b171-e322f079f377" containerName="registry-server" Feb 18 00:38:40 crc kubenswrapper[4847]: I0218 00:38:40.774697 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h9thh" Feb 18 00:38:40 crc kubenswrapper[4847]: I0218 00:38:40.791906 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h9thh"] Feb 18 00:38:40 crc kubenswrapper[4847]: I0218 00:38:40.934033 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98627cac-92e2-4f6f-83a9-0c77913b6867-utilities\") pod \"redhat-marketplace-h9thh\" (UID: \"98627cac-92e2-4f6f-83a9-0c77913b6867\") " pod="openshift-marketplace/redhat-marketplace-h9thh" Feb 18 00:38:40 crc kubenswrapper[4847]: I0218 00:38:40.934124 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5l69\" (UniqueName: \"kubernetes.io/projected/98627cac-92e2-4f6f-83a9-0c77913b6867-kube-api-access-j5l69\") pod \"redhat-marketplace-h9thh\" (UID: \"98627cac-92e2-4f6f-83a9-0c77913b6867\") " pod="openshift-marketplace/redhat-marketplace-h9thh" Feb 18 00:38:40 crc kubenswrapper[4847]: I0218 00:38:40.934173 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98627cac-92e2-4f6f-83a9-0c77913b6867-catalog-content\") pod \"redhat-marketplace-h9thh\" (UID: \"98627cac-92e2-4f6f-83a9-0c77913b6867\") " pod="openshift-marketplace/redhat-marketplace-h9thh" Feb 18 00:38:41 crc kubenswrapper[4847]: I0218 00:38:41.035297 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98627cac-92e2-4f6f-83a9-0c77913b6867-catalog-content\") pod \"redhat-marketplace-h9thh\" (UID: \"98627cac-92e2-4f6f-83a9-0c77913b6867\") " pod="openshift-marketplace/redhat-marketplace-h9thh" Feb 18 00:38:41 crc kubenswrapper[4847]: I0218 00:38:41.035382 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98627cac-92e2-4f6f-83a9-0c77913b6867-utilities\") pod \"redhat-marketplace-h9thh\" (UID: \"98627cac-92e2-4f6f-83a9-0c77913b6867\") " pod="openshift-marketplace/redhat-marketplace-h9thh" Feb 18 00:38:41 crc kubenswrapper[4847]: I0218 00:38:41.035419 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5l69\" (UniqueName: \"kubernetes.io/projected/98627cac-92e2-4f6f-83a9-0c77913b6867-kube-api-access-j5l69\") pod \"redhat-marketplace-h9thh\" (UID: \"98627cac-92e2-4f6f-83a9-0c77913b6867\") " pod="openshift-marketplace/redhat-marketplace-h9thh" Feb 18 00:38:41 crc kubenswrapper[4847]: I0218 00:38:41.035894 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98627cac-92e2-4f6f-83a9-0c77913b6867-catalog-content\") pod \"redhat-marketplace-h9thh\" (UID: \"98627cac-92e2-4f6f-83a9-0c77913b6867\") " pod="openshift-marketplace/redhat-marketplace-h9thh" Feb 18 00:38:41 crc kubenswrapper[4847]: I0218 00:38:41.035959 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98627cac-92e2-4f6f-83a9-0c77913b6867-utilities\") pod \"redhat-marketplace-h9thh\" (UID: \"98627cac-92e2-4f6f-83a9-0c77913b6867\") " pod="openshift-marketplace/redhat-marketplace-h9thh" Feb 18 00:38:41 crc kubenswrapper[4847]: I0218 00:38:41.060628 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5l69\" (UniqueName: \"kubernetes.io/projected/98627cac-92e2-4f6f-83a9-0c77913b6867-kube-api-access-j5l69\") pod \"redhat-marketplace-h9thh\" (UID: \"98627cac-92e2-4f6f-83a9-0c77913b6867\") " pod="openshift-marketplace/redhat-marketplace-h9thh" Feb 18 00:38:41 crc kubenswrapper[4847]: I0218 00:38:41.100436 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h9thh" Feb 18 00:38:41 crc kubenswrapper[4847]: I0218 00:38:41.389094 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h9thh"] Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.209024 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk"] Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.210187 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk" Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.212231 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.212420 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.214447 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.218217 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-jbw77" Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.218998 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.219142 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.232134 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk"] Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.336743 4847 generic.go:334] "Generic (PLEG): container finished" podID="98627cac-92e2-4f6f-83a9-0c77913b6867" containerID="86d55a595e9e9dd01b14fbbce6df49ce5f6e230a3260a731deafc4c0008f408b" exitCode=0 Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.336782 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h9thh" event={"ID":"98627cac-92e2-4f6f-83a9-0c77913b6867","Type":"ContainerDied","Data":"86d55a595e9e9dd01b14fbbce6df49ce5f6e230a3260a731deafc4c0008f408b"} Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.336807 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h9thh" event={"ID":"98627cac-92e2-4f6f-83a9-0c77913b6867","Type":"ContainerStarted","Data":"bc9a46314943830c28a54652cd508f3be4257a224960db737ebebed3cc1bddd2"} Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.353666 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6f64cb577-8nrqk\" (UID: \"c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk" Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.353711 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fktdb\" (UniqueName: \"kubernetes.io/projected/c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe-kube-api-access-fktdb\") pod \"loki-operator-controller-manager-6f64cb577-8nrqk\" (UID: \"c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk" Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 
00:38:42.353742 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe-webhook-cert\") pod \"loki-operator-controller-manager-6f64cb577-8nrqk\" (UID: \"c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk" Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.353771 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe-manager-config\") pod \"loki-operator-controller-manager-6f64cb577-8nrqk\" (UID: \"c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk" Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.353789 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe-apiservice-cert\") pod \"loki-operator-controller-manager-6f64cb577-8nrqk\" (UID: \"c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk" Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.455349 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe-webhook-cert\") pod \"loki-operator-controller-manager-6f64cb577-8nrqk\" (UID: \"c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk" Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.455641 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: 
\"kubernetes.io/configmap/c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe-manager-config\") pod \"loki-operator-controller-manager-6f64cb577-8nrqk\" (UID: \"c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk" Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.455736 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe-apiservice-cert\") pod \"loki-operator-controller-manager-6f64cb577-8nrqk\" (UID: \"c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk" Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.455860 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6f64cb577-8nrqk\" (UID: \"c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk" Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.455957 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fktdb\" (UniqueName: \"kubernetes.io/projected/c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe-kube-api-access-fktdb\") pod \"loki-operator-controller-manager-6f64cb577-8nrqk\" (UID: \"c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk" Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.456630 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe-manager-config\") pod \"loki-operator-controller-manager-6f64cb577-8nrqk\" (UID: \"c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe\") " 
pod="openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk" Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.464848 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe-apiservice-cert\") pod \"loki-operator-controller-manager-6f64cb577-8nrqk\" (UID: \"c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk" Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.465539 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe-webhook-cert\") pod \"loki-operator-controller-manager-6f64cb577-8nrqk\" (UID: \"c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk" Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.474420 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-6f64cb577-8nrqk\" (UID: \"c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk" Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.475402 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fktdb\" (UniqueName: \"kubernetes.io/projected/c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe-kube-api-access-fktdb\") pod \"loki-operator-controller-manager-6f64cb577-8nrqk\" (UID: \"c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe\") " pod="openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk" Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.522027 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk" Feb 18 00:38:42 crc kubenswrapper[4847]: I0218 00:38:42.841547 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk"] Feb 18 00:38:43 crc kubenswrapper[4847]: I0218 00:38:43.348348 4847 generic.go:334] "Generic (PLEG): container finished" podID="98627cac-92e2-4f6f-83a9-0c77913b6867" containerID="23f95756f1a9f02431c51f1eb86a57e32c720696c865b856678e15f9754ed01e" exitCode=0 Feb 18 00:38:43 crc kubenswrapper[4847]: I0218 00:38:43.348382 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h9thh" event={"ID":"98627cac-92e2-4f6f-83a9-0c77913b6867","Type":"ContainerDied","Data":"23f95756f1a9f02431c51f1eb86a57e32c720696c865b856678e15f9754ed01e"} Feb 18 00:38:43 crc kubenswrapper[4847]: I0218 00:38:43.350366 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk" event={"ID":"c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe","Type":"ContainerStarted","Data":"92020263e5a807af189495734b3620f5a8c76573abc0383b60132e23ff3dbd44"} Feb 18 00:38:44 crc kubenswrapper[4847]: I0218 00:38:44.374742 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h9thh" event={"ID":"98627cac-92e2-4f6f-83a9-0c77913b6867","Type":"ContainerStarted","Data":"b2d96977c20ea9cc3ed20591f710bf1cdb64c7c0ba6aeff1d73272eb84ee37fe"} Feb 18 00:38:44 crc kubenswrapper[4847]: I0218 00:38:44.391657 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h9thh" podStartSLOduration=2.991556375 podStartE2EDuration="4.391642047s" podCreationTimestamp="2026-02-18 00:38:40 +0000 UTC" firstStartedPulling="2026-02-18 00:38:42.338181426 +0000 UTC m=+795.715532368" lastFinishedPulling="2026-02-18 00:38:43.738267058 
+0000 UTC m=+797.115618040" observedRunningTime="2026-02-18 00:38:44.3892395 +0000 UTC m=+797.766590442" watchObservedRunningTime="2026-02-18 00:38:44.391642047 +0000 UTC m=+797.768992989" Feb 18 00:38:48 crc kubenswrapper[4847]: I0218 00:38:48.398746 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-9ht6n"] Feb 18 00:38:48 crc kubenswrapper[4847]: I0218 00:38:48.399748 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-9ht6n" Feb 18 00:38:48 crc kubenswrapper[4847]: I0218 00:38:48.401214 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-hxxb9" Feb 18 00:38:48 crc kubenswrapper[4847]: I0218 00:38:48.401490 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Feb 18 00:38:48 crc kubenswrapper[4847]: I0218 00:38:48.403484 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Feb 18 00:38:48 crc kubenswrapper[4847]: I0218 00:38:48.409669 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk" event={"ID":"c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe","Type":"ContainerStarted","Data":"5d5d514b37c2ed88b203334b8e3867fcf8087db529375ba6ded69ef3511504a4"} Feb 18 00:38:48 crc kubenswrapper[4847]: I0218 00:38:48.415258 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-9ht6n"] Feb 18 00:38:48 crc kubenswrapper[4847]: I0218 00:38:48.547845 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkrqz\" (UniqueName: \"kubernetes.io/projected/3833d12b-09b8-4c7c-8f7b-a5d7eec27940-kube-api-access-gkrqz\") pod \"cluster-logging-operator-c769fd969-9ht6n\" (UID: 
\"3833d12b-09b8-4c7c-8f7b-a5d7eec27940\") " pod="openshift-logging/cluster-logging-operator-c769fd969-9ht6n" Feb 18 00:38:48 crc kubenswrapper[4847]: I0218 00:38:48.649516 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkrqz\" (UniqueName: \"kubernetes.io/projected/3833d12b-09b8-4c7c-8f7b-a5d7eec27940-kube-api-access-gkrqz\") pod \"cluster-logging-operator-c769fd969-9ht6n\" (UID: \"3833d12b-09b8-4c7c-8f7b-a5d7eec27940\") " pod="openshift-logging/cluster-logging-operator-c769fd969-9ht6n" Feb 18 00:38:48 crc kubenswrapper[4847]: I0218 00:38:48.667112 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkrqz\" (UniqueName: \"kubernetes.io/projected/3833d12b-09b8-4c7c-8f7b-a5d7eec27940-kube-api-access-gkrqz\") pod \"cluster-logging-operator-c769fd969-9ht6n\" (UID: \"3833d12b-09b8-4c7c-8f7b-a5d7eec27940\") " pod="openshift-logging/cluster-logging-operator-c769fd969-9ht6n" Feb 18 00:38:48 crc kubenswrapper[4847]: I0218 00:38:48.715354 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-9ht6n" Feb 18 00:38:48 crc kubenswrapper[4847]: I0218 00:38:48.942498 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-9ht6n"] Feb 18 00:38:49 crc kubenswrapper[4847]: I0218 00:38:49.433322 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-9ht6n" event={"ID":"3833d12b-09b8-4c7c-8f7b-a5d7eec27940","Type":"ContainerStarted","Data":"02cdb8800b221089d96def95197ea843a1ee28657fc42ef3b0f87fddca53e040"} Feb 18 00:38:51 crc kubenswrapper[4847]: I0218 00:38:51.104288 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h9thh" Feb 18 00:38:51 crc kubenswrapper[4847]: I0218 00:38:51.104650 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h9thh" Feb 18 00:38:51 crc kubenswrapper[4847]: I0218 00:38:51.167563 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h9thh" Feb 18 00:38:51 crc kubenswrapper[4847]: I0218 00:38:51.502954 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h9thh" Feb 18 00:38:53 crc kubenswrapper[4847]: I0218 00:38:53.955152 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h9thh"] Feb 18 00:38:53 crc kubenswrapper[4847]: I0218 00:38:53.955585 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h9thh" podUID="98627cac-92e2-4f6f-83a9-0c77913b6867" containerName="registry-server" containerID="cri-o://b2d96977c20ea9cc3ed20591f710bf1cdb64c7c0ba6aeff1d73272eb84ee37fe" gracePeriod=2 Feb 18 00:38:54 crc kubenswrapper[4847]: I0218 00:38:54.471991 4847 generic.go:334] 
"Generic (PLEG): container finished" podID="98627cac-92e2-4f6f-83a9-0c77913b6867" containerID="b2d96977c20ea9cc3ed20591f710bf1cdb64c7c0ba6aeff1d73272eb84ee37fe" exitCode=0 Feb 18 00:38:54 crc kubenswrapper[4847]: I0218 00:38:54.472035 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h9thh" event={"ID":"98627cac-92e2-4f6f-83a9-0c77913b6867","Type":"ContainerDied","Data":"b2d96977c20ea9cc3ed20591f710bf1cdb64c7c0ba6aeff1d73272eb84ee37fe"} Feb 18 00:38:57 crc kubenswrapper[4847]: I0218 00:38:57.307648 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h9thh" Feb 18 00:38:57 crc kubenswrapper[4847]: I0218 00:38:57.405035 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5l69\" (UniqueName: \"kubernetes.io/projected/98627cac-92e2-4f6f-83a9-0c77913b6867-kube-api-access-j5l69\") pod \"98627cac-92e2-4f6f-83a9-0c77913b6867\" (UID: \"98627cac-92e2-4f6f-83a9-0c77913b6867\") " Feb 18 00:38:57 crc kubenswrapper[4847]: I0218 00:38:57.405148 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98627cac-92e2-4f6f-83a9-0c77913b6867-utilities\") pod \"98627cac-92e2-4f6f-83a9-0c77913b6867\" (UID: \"98627cac-92e2-4f6f-83a9-0c77913b6867\") " Feb 18 00:38:57 crc kubenswrapper[4847]: I0218 00:38:57.405204 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98627cac-92e2-4f6f-83a9-0c77913b6867-catalog-content\") pod \"98627cac-92e2-4f6f-83a9-0c77913b6867\" (UID: \"98627cac-92e2-4f6f-83a9-0c77913b6867\") " Feb 18 00:38:57 crc kubenswrapper[4847]: I0218 00:38:57.406388 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98627cac-92e2-4f6f-83a9-0c77913b6867-utilities" (OuterVolumeSpecName: 
"utilities") pod "98627cac-92e2-4f6f-83a9-0c77913b6867" (UID: "98627cac-92e2-4f6f-83a9-0c77913b6867"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:38:57 crc kubenswrapper[4847]: I0218 00:38:57.413037 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98627cac-92e2-4f6f-83a9-0c77913b6867-kube-api-access-j5l69" (OuterVolumeSpecName: "kube-api-access-j5l69") pod "98627cac-92e2-4f6f-83a9-0c77913b6867" (UID: "98627cac-92e2-4f6f-83a9-0c77913b6867"). InnerVolumeSpecName "kube-api-access-j5l69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:38:57 crc kubenswrapper[4847]: I0218 00:38:57.427827 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98627cac-92e2-4f6f-83a9-0c77913b6867-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98627cac-92e2-4f6f-83a9-0c77913b6867" (UID: "98627cac-92e2-4f6f-83a9-0c77913b6867"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:38:57 crc kubenswrapper[4847]: I0218 00:38:57.491557 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk" event={"ID":"c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe","Type":"ContainerStarted","Data":"fcee9d69d66d25def84684a311c0d559b0e8375d94035682b32e726130dd9051"} Feb 18 00:38:57 crc kubenswrapper[4847]: I0218 00:38:57.491928 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk" Feb 18 00:38:57 crc kubenswrapper[4847]: I0218 00:38:57.493085 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-9ht6n" event={"ID":"3833d12b-09b8-4c7c-8f7b-a5d7eec27940","Type":"ContainerStarted","Data":"0b02bad02905e4bc23885e784ace3664b78f010795eace74f14add029f8985ea"} Feb 18 00:38:57 crc kubenswrapper[4847]: I0218 00:38:57.493552 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk" Feb 18 00:38:57 crc kubenswrapper[4847]: I0218 00:38:57.495050 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h9thh" event={"ID":"98627cac-92e2-4f6f-83a9-0c77913b6867","Type":"ContainerDied","Data":"bc9a46314943830c28a54652cd508f3be4257a224960db737ebebed3cc1bddd2"} Feb 18 00:38:57 crc kubenswrapper[4847]: I0218 00:38:57.495083 4847 scope.go:117] "RemoveContainer" containerID="b2d96977c20ea9cc3ed20591f710bf1cdb64c7c0ba6aeff1d73272eb84ee37fe" Feb 18 00:38:57 crc kubenswrapper[4847]: I0218 00:38:57.495132 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h9thh" Feb 18 00:38:57 crc kubenswrapper[4847]: I0218 00:38:57.506660 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5l69\" (UniqueName: \"kubernetes.io/projected/98627cac-92e2-4f6f-83a9-0c77913b6867-kube-api-access-j5l69\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:57 crc kubenswrapper[4847]: I0218 00:38:57.506683 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98627cac-92e2-4f6f-83a9-0c77913b6867-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:57 crc kubenswrapper[4847]: I0218 00:38:57.506695 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98627cac-92e2-4f6f-83a9-0c77913b6867-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:38:57 crc kubenswrapper[4847]: I0218 00:38:57.516822 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-6f64cb577-8nrqk" podStartSLOduration=1.378604894 podStartE2EDuration="15.516802847s" podCreationTimestamp="2026-02-18 00:38:42 +0000 UTC" firstStartedPulling="2026-02-18 00:38:42.857371126 +0000 UTC m=+796.234722068" lastFinishedPulling="2026-02-18 00:38:56.995569079 +0000 UTC m=+810.372920021" observedRunningTime="2026-02-18 00:38:57.514890872 +0000 UTC m=+810.892241814" watchObservedRunningTime="2026-02-18 00:38:57.516802847 +0000 UTC m=+810.894153789" Feb 18 00:38:57 crc kubenswrapper[4847]: I0218 00:38:57.517550 4847 scope.go:117] "RemoveContainer" containerID="23f95756f1a9f02431c51f1eb86a57e32c720696c865b856678e15f9754ed01e" Feb 18 00:38:57 crc kubenswrapper[4847]: I0218 00:38:57.533437 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h9thh"] Feb 18 00:38:57 crc kubenswrapper[4847]: I0218 00:38:57.538674 4847 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h9thh"] Feb 18 00:38:57 crc kubenswrapper[4847]: I0218 00:38:57.549237 4847 scope.go:117] "RemoveContainer" containerID="86d55a595e9e9dd01b14fbbce6df49ce5f6e230a3260a731deafc4c0008f408b" Feb 18 00:38:57 crc kubenswrapper[4847]: I0218 00:38:57.590262 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-9ht6n" podStartSLOduration=1.478994155 podStartE2EDuration="9.590237955s" podCreationTimestamp="2026-02-18 00:38:48 +0000 UTC" firstStartedPulling="2026-02-18 00:38:48.952982435 +0000 UTC m=+802.330333387" lastFinishedPulling="2026-02-18 00:38:57.064226245 +0000 UTC m=+810.441577187" observedRunningTime="2026-02-18 00:38:57.585617287 +0000 UTC m=+810.962968239" watchObservedRunningTime="2026-02-18 00:38:57.590237955 +0000 UTC m=+810.967588907" Feb 18 00:38:59 crc kubenswrapper[4847]: I0218 00:38:59.411433 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98627cac-92e2-4f6f-83a9-0c77913b6867" path="/var/lib/kubelet/pods/98627cac-92e2-4f6f-83a9-0c77913b6867/volumes" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.063826 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 18 00:39:02 crc kubenswrapper[4847]: E0218 00:39:02.064795 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98627cac-92e2-4f6f-83a9-0c77913b6867" containerName="extract-content" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.064817 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="98627cac-92e2-4f6f-83a9-0c77913b6867" containerName="extract-content" Feb 18 00:39:02 crc kubenswrapper[4847]: E0218 00:39:02.064843 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98627cac-92e2-4f6f-83a9-0c77913b6867" containerName="extract-utilities" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.064856 4847 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="98627cac-92e2-4f6f-83a9-0c77913b6867" containerName="extract-utilities" Feb 18 00:39:02 crc kubenswrapper[4847]: E0218 00:39:02.064869 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98627cac-92e2-4f6f-83a9-0c77913b6867" containerName="registry-server" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.064881 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="98627cac-92e2-4f6f-83a9-0c77913b6867" containerName="registry-server" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.065063 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="98627cac-92e2-4f6f-83a9-0c77913b6867" containerName="registry-server" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.065729 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.068647 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.069338 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.075568 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.166808 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0e4c99a8-85f0-4527-a236-3bec2cb0655a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e4c99a8-85f0-4527-a236-3bec2cb0655a\") pod \"minio\" (UID: \"b2021d84-eebd-4d70-8bff-09717c786f61\") " pod="minio-dev/minio" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.166881 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxfkn\" (UniqueName: 
\"kubernetes.io/projected/b2021d84-eebd-4d70-8bff-09717c786f61-kube-api-access-pxfkn\") pod \"minio\" (UID: \"b2021d84-eebd-4d70-8bff-09717c786f61\") " pod="minio-dev/minio" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.180221 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rpn9l"] Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.182105 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rpn9l" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.192113 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rpn9l"] Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.267871 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km744\" (UniqueName: \"kubernetes.io/projected/1f641c3f-2a1e-4023-a8b6-e476eaab95e5-kube-api-access-km744\") pod \"community-operators-rpn9l\" (UID: \"1f641c3f-2a1e-4023-a8b6-e476eaab95e5\") " pod="openshift-marketplace/community-operators-rpn9l" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.267942 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0e4c99a8-85f0-4527-a236-3bec2cb0655a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e4c99a8-85f0-4527-a236-3bec2cb0655a\") pod \"minio\" (UID: \"b2021d84-eebd-4d70-8bff-09717c786f61\") " pod="minio-dev/minio" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.267967 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f641c3f-2a1e-4023-a8b6-e476eaab95e5-utilities\") pod \"community-operators-rpn9l\" (UID: \"1f641c3f-2a1e-4023-a8b6-e476eaab95e5\") " pod="openshift-marketplace/community-operators-rpn9l" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.267991 4847 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f641c3f-2a1e-4023-a8b6-e476eaab95e5-catalog-content\") pod \"community-operators-rpn9l\" (UID: \"1f641c3f-2a1e-4023-a8b6-e476eaab95e5\") " pod="openshift-marketplace/community-operators-rpn9l" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.268066 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxfkn\" (UniqueName: \"kubernetes.io/projected/b2021d84-eebd-4d70-8bff-09717c786f61-kube-api-access-pxfkn\") pod \"minio\" (UID: \"b2021d84-eebd-4d70-8bff-09717c786f61\") " pod="minio-dev/minio" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.272925 4847 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.272968 4847 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0e4c99a8-85f0-4527-a236-3bec2cb0655a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e4c99a8-85f0-4527-a236-3bec2cb0655a\") pod \"minio\" (UID: \"b2021d84-eebd-4d70-8bff-09717c786f61\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/02b8e225353c44b8768d3683a477cb0d23d2921e0b58545c303d26448641aeb6/globalmount\"" pod="minio-dev/minio" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.288791 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxfkn\" (UniqueName: \"kubernetes.io/projected/b2021d84-eebd-4d70-8bff-09717c786f61-kube-api-access-pxfkn\") pod \"minio\" (UID: \"b2021d84-eebd-4d70-8bff-09717c786f61\") " pod="minio-dev/minio" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.300091 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0e4c99a8-85f0-4527-a236-3bec2cb0655a\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0e4c99a8-85f0-4527-a236-3bec2cb0655a\") pod \"minio\" (UID: \"b2021d84-eebd-4d70-8bff-09717c786f61\") " pod="minio-dev/minio" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.369690 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km744\" (UniqueName: \"kubernetes.io/projected/1f641c3f-2a1e-4023-a8b6-e476eaab95e5-kube-api-access-km744\") pod \"community-operators-rpn9l\" (UID: \"1f641c3f-2a1e-4023-a8b6-e476eaab95e5\") " pod="openshift-marketplace/community-operators-rpn9l" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.369794 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f641c3f-2a1e-4023-a8b6-e476eaab95e5-utilities\") pod \"community-operators-rpn9l\" (UID: \"1f641c3f-2a1e-4023-a8b6-e476eaab95e5\") " pod="openshift-marketplace/community-operators-rpn9l" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.369841 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f641c3f-2a1e-4023-a8b6-e476eaab95e5-catalog-content\") pod \"community-operators-rpn9l\" (UID: \"1f641c3f-2a1e-4023-a8b6-e476eaab95e5\") " pod="openshift-marketplace/community-operators-rpn9l" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.370313 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f641c3f-2a1e-4023-a8b6-e476eaab95e5-utilities\") pod \"community-operators-rpn9l\" (UID: \"1f641c3f-2a1e-4023-a8b6-e476eaab95e5\") " pod="openshift-marketplace/community-operators-rpn9l" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.370475 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1f641c3f-2a1e-4023-a8b6-e476eaab95e5-catalog-content\") pod \"community-operators-rpn9l\" (UID: \"1f641c3f-2a1e-4023-a8b6-e476eaab95e5\") " pod="openshift-marketplace/community-operators-rpn9l" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.389373 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km744\" (UniqueName: \"kubernetes.io/projected/1f641c3f-2a1e-4023-a8b6-e476eaab95e5-kube-api-access-km744\") pod \"community-operators-rpn9l\" (UID: \"1f641c3f-2a1e-4023-a8b6-e476eaab95e5\") " pod="openshift-marketplace/community-operators-rpn9l" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.411105 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.504282 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rpn9l" Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.617181 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 18 00:39:02 crc kubenswrapper[4847]: I0218 00:39:02.799935 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rpn9l"] Feb 18 00:39:02 crc kubenswrapper[4847]: W0218 00:39:02.804241 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f641c3f_2a1e_4023_a8b6_e476eaab95e5.slice/crio-6c7f12e707285c09001f1c847b4db1ec86208a4eb7361ce7b7e99da26d5cbbe2 WatchSource:0}: Error finding container 6c7f12e707285c09001f1c847b4db1ec86208a4eb7361ce7b7e99da26d5cbbe2: Status 404 returned error can't find the container with id 6c7f12e707285c09001f1c847b4db1ec86208a4eb7361ce7b7e99da26d5cbbe2 Feb 18 00:39:03 crc kubenswrapper[4847]: I0218 00:39:03.566101 4847 generic.go:334] "Generic (PLEG): container finished" podID="1f641c3f-2a1e-4023-a8b6-e476eaab95e5" 
containerID="b9a430e55f975d70ed6e049ad8d32188800d4b9882d80ec0b988ca2fee8fec05" exitCode=0 Feb 18 00:39:03 crc kubenswrapper[4847]: I0218 00:39:03.566669 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpn9l" event={"ID":"1f641c3f-2a1e-4023-a8b6-e476eaab95e5","Type":"ContainerDied","Data":"b9a430e55f975d70ed6e049ad8d32188800d4b9882d80ec0b988ca2fee8fec05"} Feb 18 00:39:03 crc kubenswrapper[4847]: I0218 00:39:03.566808 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpn9l" event={"ID":"1f641c3f-2a1e-4023-a8b6-e476eaab95e5","Type":"ContainerStarted","Data":"6c7f12e707285c09001f1c847b4db1ec86208a4eb7361ce7b7e99da26d5cbbe2"} Feb 18 00:39:03 crc kubenswrapper[4847]: I0218 00:39:03.574230 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"b2021d84-eebd-4d70-8bff-09717c786f61","Type":"ContainerStarted","Data":"a48c666a2c1a1ba1e1b6f807b9b032096b530865d6356113e4a7da7c9e5589fc"} Feb 18 00:39:06 crc kubenswrapper[4847]: I0218 00:39:06.603018 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"b2021d84-eebd-4d70-8bff-09717c786f61","Type":"ContainerStarted","Data":"98da109c3cbdeaa54dc88dc063527c821576e176ed6c863d432d8383fba20651"} Feb 18 00:39:06 crc kubenswrapper[4847]: I0218 00:39:06.606206 4847 generic.go:334] "Generic (PLEG): container finished" podID="1f641c3f-2a1e-4023-a8b6-e476eaab95e5" containerID="1c302e66bb64f11a9c105670a23ac6740e405edc4e2cfbdc6316991dd09bd98b" exitCode=0 Feb 18 00:39:06 crc kubenswrapper[4847]: I0218 00:39:06.606246 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpn9l" event={"ID":"1f641c3f-2a1e-4023-a8b6-e476eaab95e5","Type":"ContainerDied","Data":"1c302e66bb64f11a9c105670a23ac6740e405edc4e2cfbdc6316991dd09bd98b"} Feb 18 00:39:06 crc kubenswrapper[4847]: I0218 00:39:06.629800 4847 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.417435057 podStartE2EDuration="7.629783595s" podCreationTimestamp="2026-02-18 00:38:59 +0000 UTC" firstStartedPulling="2026-02-18 00:39:02.639795453 +0000 UTC m=+816.017146395" lastFinishedPulling="2026-02-18 00:39:05.852143941 +0000 UTC m=+819.229494933" observedRunningTime="2026-02-18 00:39:06.626131639 +0000 UTC m=+820.003482621" watchObservedRunningTime="2026-02-18 00:39:06.629783595 +0000 UTC m=+820.007134537" Feb 18 00:39:07 crc kubenswrapper[4847]: I0218 00:39:07.613541 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpn9l" event={"ID":"1f641c3f-2a1e-4023-a8b6-e476eaab95e5","Type":"ContainerStarted","Data":"204470387b1de84a298a45a2054f33e79bef8417a901540aeb6de8093b70d83d"} Feb 18 00:39:07 crc kubenswrapper[4847]: I0218 00:39:07.635000 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rpn9l" podStartSLOduration=2.394933474 podStartE2EDuration="5.634982554s" podCreationTimestamp="2026-02-18 00:39:02 +0000 UTC" firstStartedPulling="2026-02-18 00:39:03.796337785 +0000 UTC m=+817.173688727" lastFinishedPulling="2026-02-18 00:39:07.036386835 +0000 UTC m=+820.413737807" observedRunningTime="2026-02-18 00:39:07.631146294 +0000 UTC m=+821.008497236" watchObservedRunningTime="2026-02-18 00:39:07.634982554 +0000 UTC m=+821.012333496" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.003012 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8"] Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.005118 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.008202 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.011408 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.011457 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.011475 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-tdwrt" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.016100 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.043655 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8"] Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.144000 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-x76k8\" (UID: \"9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.144058 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwc9r\" (UniqueName: \"kubernetes.io/projected/9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843-kube-api-access-kwc9r\") pod \"logging-loki-distributor-5d5548c9f5-x76k8\" (UID: 
\"9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.144088 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-x76k8\" (UID: \"9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.144131 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843-config\") pod \"logging-loki-distributor-5d5548c9f5-x76k8\" (UID: \"9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.144168 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-x76k8\" (UID: \"9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.246712 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-x76k8\" (UID: \"9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.246966 4847 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwc9r\" (UniqueName: \"kubernetes.io/projected/9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843-kube-api-access-kwc9r\") pod \"logging-loki-distributor-5d5548c9f5-x76k8\" (UID: \"9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.246999 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-x76k8\" (UID: \"9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.247050 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843-config\") pod \"logging-loki-distributor-5d5548c9f5-x76k8\" (UID: \"9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.247089 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-x76k8\" (UID: \"9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.247910 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-x76k8\" (UID: 
\"9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.248519 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843-config\") pod \"logging-loki-distributor-5d5548c9f5-x76k8\" (UID: \"9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.252693 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f"] Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.252791 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-x76k8\" (UID: \"9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.258161 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.260428 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.260667 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.260838 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.264199 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-x76k8\" (UID: \"9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.291449 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwc9r\" (UniqueName: \"kubernetes.io/projected/9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843-kube-api-access-kwc9r\") pod \"logging-loki-distributor-5d5548c9f5-x76k8\" (UID: \"9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.333833 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f"] Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.337614 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2"] Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.337755 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.338950 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.340857 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.341246 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.362382 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2"] Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.446651 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d"] Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.447919 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.449355 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.449715 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.450582 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.451013 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.451977 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.454031 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5wvs\" (UniqueName: \"kubernetes.io/projected/ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753-kube-api-access-l5wvs\") pod \"logging-loki-querier-76bf7b6d45-wnr8f\" (UID: \"ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.454081 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/777cf1df-2302-473d-87b1-893df3304f21-lokistack-gateway\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.454112 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-wnr8f\" (UID: \"ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.454139 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-wnr8f\" (UID: \"ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.454166 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/810922d4-8577-496f-ad3a-a49c2122d91d-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-wsvv2\" (UID: \"810922d4-8577-496f-ad3a-a49c2122d91d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.454244 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/777cf1df-2302-473d-87b1-893df3304f21-rbac\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.454283 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: 
\"kubernetes.io/secret/777cf1df-2302-473d-87b1-893df3304f21-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.454316 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/810922d4-8577-496f-ad3a-a49c2122d91d-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-wsvv2\" (UID: \"810922d4-8577-496f-ad3a-a49c2122d91d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.454342 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/777cf1df-2302-473d-87b1-893df3304f21-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.454372 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-wnr8f\" (UID: \"ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.454393 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/777cf1df-2302-473d-87b1-893df3304f21-tenants\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: 
\"777cf1df-2302-473d-87b1-893df3304f21\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.454415 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/777cf1df-2302-473d-87b1-893df3304f21-tls-secret\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.454436 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/810922d4-8577-496f-ad3a-a49c2122d91d-config\") pod \"logging-loki-query-frontend-6d6859c548-wsvv2\" (UID: \"810922d4-8577-496f-ad3a-a49c2122d91d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.454458 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfpsb\" (UniqueName: \"kubernetes.io/projected/777cf1df-2302-473d-87b1-893df3304f21-kube-api-access-gfpsb\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.454485 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/810922d4-8577-496f-ad3a-a49c2122d91d-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-wsvv2\" (UID: \"810922d4-8577-496f-ad3a-a49c2122d91d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.454511 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-wnr8f\" (UID: \"ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.454540 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753-config\") pod \"logging-loki-querier-76bf7b6d45-wnr8f\" (UID: \"ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.454567 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/777cf1df-2302-473d-87b1-893df3304f21-logging-loki-ca-bundle\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.454670 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr4n9\" (UniqueName: \"kubernetes.io/projected/810922d4-8577-496f-ad3a-a49c2122d91d-kube-api-access-pr4n9\") pod \"logging-loki-query-frontend-6d6859c548-wsvv2\" (UID: \"810922d4-8577-496f-ad3a-a49c2122d91d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.506368 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rpn9l" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.506390 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-rpn9l" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.516276 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw"] Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.519082 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.521122 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-4pt8b" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.524169 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d"] Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.527972 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw"] Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.555296 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr4n9\" (UniqueName: \"kubernetes.io/projected/810922d4-8577-496f-ad3a-a49c2122d91d-kube-api-access-pr4n9\") pod \"logging-loki-query-frontend-6d6859c548-wsvv2\" (UID: \"810922d4-8577-496f-ad3a-a49c2122d91d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.555354 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5wvs\" (UniqueName: \"kubernetes.io/projected/ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753-kube-api-access-l5wvs\") pod \"logging-loki-querier-76bf7b6d45-wnr8f\" (UID: \"ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.555378 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/777cf1df-2302-473d-87b1-893df3304f21-lokistack-gateway\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.555395 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-wnr8f\" (UID: \"ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.555416 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-wnr8f\" (UID: \"ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.555435 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/810922d4-8577-496f-ad3a-a49c2122d91d-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-wsvv2\" (UID: \"810922d4-8577-496f-ad3a-a49c2122d91d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.555455 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/777cf1df-2302-473d-87b1-893df3304f21-rbac\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 
18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.555474 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/777cf1df-2302-473d-87b1-893df3304f21-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.555495 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/810922d4-8577-496f-ad3a-a49c2122d91d-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-wsvv2\" (UID: \"810922d4-8577-496f-ad3a-a49c2122d91d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.555519 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/777cf1df-2302-473d-87b1-893df3304f21-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.555545 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/777cf1df-2302-473d-87b1-893df3304f21-tenants\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.555562 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: 
\"kubernetes.io/secret/ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-wnr8f\" (UID: \"ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.555580 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/777cf1df-2302-473d-87b1-893df3304f21-tls-secret\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.555612 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/810922d4-8577-496f-ad3a-a49c2122d91d-config\") pod \"logging-loki-query-frontend-6d6859c548-wsvv2\" (UID: \"810922d4-8577-496f-ad3a-a49c2122d91d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.555629 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfpsb\" (UniqueName: \"kubernetes.io/projected/777cf1df-2302-473d-87b1-893df3304f21-kube-api-access-gfpsb\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.555649 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/810922d4-8577-496f-ad3a-a49c2122d91d-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-wsvv2\" (UID: \"810922d4-8577-496f-ad3a-a49c2122d91d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.555664 
4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-wnr8f\" (UID: \"ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.555687 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753-config\") pod \"logging-loki-querier-76bf7b6d45-wnr8f\" (UID: \"ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.555707 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/777cf1df-2302-473d-87b1-893df3304f21-logging-loki-ca-bundle\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.556521 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/777cf1df-2302-473d-87b1-893df3304f21-logging-loki-ca-bundle\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.557669 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/777cf1df-2302-473d-87b1-893df3304f21-lokistack-gateway\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " 
pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.558427 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-wnr8f\" (UID: \"ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.559980 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rpn9l" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.560126 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/810922d4-8577-496f-ad3a-a49c2122d91d-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-wsvv2\" (UID: \"810922d4-8577-496f-ad3a-a49c2122d91d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2" Feb 18 00:39:12 crc kubenswrapper[4847]: E0218 00:39:12.560935 4847 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Feb 18 00:39:12 crc kubenswrapper[4847]: E0218 00:39:12.560987 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/777cf1df-2302-473d-87b1-893df3304f21-tls-secret podName:777cf1df-2302-473d-87b1-893df3304f21 nodeName:}" failed. No retries permitted until 2026-02-18 00:39:13.060971294 +0000 UTC m=+826.438322236 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/777cf1df-2302-473d-87b1-893df3304f21-tls-secret") pod "logging-loki-gateway-9c654d8fb-r2v6d" (UID: "777cf1df-2302-473d-87b1-893df3304f21") : secret "logging-loki-gateway-http" not found Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.563148 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-wnr8f\" (UID: \"ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.563485 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753-config\") pod \"logging-loki-querier-76bf7b6d45-wnr8f\" (UID: \"ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.564620 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/777cf1df-2302-473d-87b1-893df3304f21-tenants\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.566250 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/777cf1df-2302-473d-87b1-893df3304f21-rbac\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.566647 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/777cf1df-2302-473d-87b1-893df3304f21-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.568025 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-wnr8f\" (UID: \"ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.571899 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/777cf1df-2302-473d-87b1-893df3304f21-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.572357 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/810922d4-8577-496f-ad3a-a49c2122d91d-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-wsvv2\" (UID: \"810922d4-8577-496f-ad3a-a49c2122d91d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.572541 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/810922d4-8577-496f-ad3a-a49c2122d91d-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-wsvv2\" (UID: \"810922d4-8577-496f-ad3a-a49c2122d91d\") " 
pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.575311 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-wnr8f\" (UID: \"ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.576103 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/810922d4-8577-496f-ad3a-a49c2122d91d-config\") pod \"logging-loki-query-frontend-6d6859c548-wsvv2\" (UID: \"810922d4-8577-496f-ad3a-a49c2122d91d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.577178 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfpsb\" (UniqueName: \"kubernetes.io/projected/777cf1df-2302-473d-87b1-893df3304f21-kube-api-access-gfpsb\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.577186 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr4n9\" (UniqueName: \"kubernetes.io/projected/810922d4-8577-496f-ad3a-a49c2122d91d-kube-api-access-pr4n9\") pod \"logging-loki-query-frontend-6d6859c548-wsvv2\" (UID: \"810922d4-8577-496f-ad3a-a49c2122d91d\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.588034 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5wvs\" (UniqueName: 
\"kubernetes.io/projected/ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753-kube-api-access-l5wvs\") pod \"logging-loki-querier-76bf7b6d45-wnr8f\" (UID: \"ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.604857 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8"] Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.634916 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.646754 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8" event={"ID":"9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843","Type":"ContainerStarted","Data":"fbb5d2817bf6b2174a2baceec028156cac7aa42491ab7f852a1ca7a6df898ff0"} Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.651155 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.657488 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/aebf8b18-099f-4bfe-88ce-a34461bb4b51-rbac\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.657639 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkn5g\" (UniqueName: \"kubernetes.io/projected/aebf8b18-099f-4bfe-88ce-a34461bb4b51-kube-api-access-jkn5g\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.657667 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/aebf8b18-099f-4bfe-88ce-a34461bb4b51-lokistack-gateway\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.657746 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/aebf8b18-099f-4bfe-88ce-a34461bb4b51-tls-secret\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.657776 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: 
\"kubernetes.io/secret/aebf8b18-099f-4bfe-88ce-a34461bb4b51-tenants\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.657876 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aebf8b18-099f-4bfe-88ce-a34461bb4b51-logging-loki-ca-bundle\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.657905 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/aebf8b18-099f-4bfe-88ce-a34461bb4b51-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.658054 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aebf8b18-099f-4bfe-88ce-a34461bb4b51-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.684750 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rpn9l" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.784753 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: 
\"kubernetes.io/configmap/aebf8b18-099f-4bfe-88ce-a34461bb4b51-rbac\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.785189 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkn5g\" (UniqueName: \"kubernetes.io/projected/aebf8b18-099f-4bfe-88ce-a34461bb4b51-kube-api-access-jkn5g\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.785236 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/aebf8b18-099f-4bfe-88ce-a34461bb4b51-lokistack-gateway\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.785310 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/aebf8b18-099f-4bfe-88ce-a34461bb4b51-tls-secret\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.785379 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/aebf8b18-099f-4bfe-88ce-a34461bb4b51-tenants\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.785457 4847 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aebf8b18-099f-4bfe-88ce-a34461bb4b51-logging-loki-ca-bundle\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.785521 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/aebf8b18-099f-4bfe-88ce-a34461bb4b51-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.785680 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aebf8b18-099f-4bfe-88ce-a34461bb4b51-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.787206 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aebf8b18-099f-4bfe-88ce-a34461bb4b51-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.789915 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/aebf8b18-099f-4bfe-88ce-a34461bb4b51-rbac\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " 
pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.791157 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/aebf8b18-099f-4bfe-88ce-a34461bb4b51-lokistack-gateway\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.791618 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aebf8b18-099f-4bfe-88ce-a34461bb4b51-logging-loki-ca-bundle\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.796695 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/aebf8b18-099f-4bfe-88ce-a34461bb4b51-tenants\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.809384 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkn5g\" (UniqueName: \"kubernetes.io/projected/aebf8b18-099f-4bfe-88ce-a34461bb4b51-kube-api-access-jkn5g\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.813976 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rpn9l"] Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.818226 4847 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/aebf8b18-099f-4bfe-88ce-a34461bb4b51-tls-secret\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.823922 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/aebf8b18-099f-4bfe-88ce-a34461bb4b51-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-9c654d8fb-tcxtw\" (UID: \"aebf8b18-099f-4bfe-88ce-a34461bb4b51\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.847807 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:12 crc kubenswrapper[4847]: I0218 00:39:12.885903 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f"] Feb 18 00:39:12 crc kubenswrapper[4847]: W0218 00:39:12.896518 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec7e43fc_d7e7_4bb1_a7cc_a62d2be0a753.slice/crio-89f6dc7f5af70a6f4bc2e5716a06a97bfea72ed48c304a937c4df79f06550c77 WatchSource:0}: Error finding container 89f6dc7f5af70a6f4bc2e5716a06a97bfea72ed48c304a937c4df79f06550c77: Status 404 returned error can't find the container with id 89f6dc7f5af70a6f4bc2e5716a06a97bfea72ed48c304a937c4df79f06550c77 Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.091858 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/777cf1df-2302-473d-87b1-893df3304f21-tls-secret\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " 
pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.097682 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/777cf1df-2302-473d-87b1-893df3304f21-tls-secret\") pod \"logging-loki-gateway-9c654d8fb-r2v6d\" (UID: \"777cf1df-2302-473d-87b1-893df3304f21\") " pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:13 crc kubenswrapper[4847]: W0218 00:39:13.125888 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod810922d4_8577_496f_ad3a_a49c2122d91d.slice/crio-6eea8f5b3f229c560ada6bed8b2b84965f06a234757685201d562e8f02375f3e WatchSource:0}: Error finding container 6eea8f5b3f229c560ada6bed8b2b84965f06a234757685201d562e8f02375f3e: Status 404 returned error can't find the container with id 6eea8f5b3f229c560ada6bed8b2b84965f06a234757685201d562e8f02375f3e Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.126380 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2"] Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.287241 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.288311 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.291378 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.292139 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.296188 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw"] Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.309533 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.310475 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.314123 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.314533 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.314708 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.347719 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.362844 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.395619 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/6660c016-1faa-43e2-904c-3e8db37f6b3d-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.395697 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-56a12d72-ea0e-48f7-a8b8-e735ad954557\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56a12d72-ea0e-48f7-a8b8-e735ad954557\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.395728 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6660c016-1faa-43e2-904c-3e8db37f6b3d-config\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.395764 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bqjz\" (UniqueName: \"kubernetes.io/projected/6660c016-1faa-43e2-904c-3e8db37f6b3d-kube-api-access-6bqjz\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.395811 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-49955224-8540-4105-a0e7-b53663f5d94c\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49955224-8540-4105-a0e7-b53663f5d94c\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.395834 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6660c016-1faa-43e2-904c-3e8db37f6b3d-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.395893 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/6660c016-1faa-43e2-904c-3e8db37f6b3d-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.395919 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6660c016-1faa-43e2-904c-3e8db37f6b3d-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.414955 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.415969 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.417658 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.419389 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.435264 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.497545 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-455309a9-cb6d-440d-9ef3-c2ad538d9930\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-455309a9-cb6d-440d-9ef3-c2ad538d9930\") pod \"logging-loki-compactor-0\" (UID: \"b1105da5-f79a-4638-a2cd-9e9219b02682\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.497639 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-49955224-8540-4105-a0e7-b53663f5d94c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49955224-8540-4105-a0e7-b53663f5d94c\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.497665 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6660c016-1faa-43e2-904c-3e8db37f6b3d-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.497686 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/b1105da5-f79a-4638-a2cd-9e9219b02682-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"b1105da5-f79a-4638-a2cd-9e9219b02682\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.497716 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1105da5-f79a-4638-a2cd-9e9219b02682-config\") pod \"logging-loki-compactor-0\" (UID: \"b1105da5-f79a-4638-a2cd-9e9219b02682\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.497736 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/b1105da5-f79a-4638-a2cd-9e9219b02682-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"b1105da5-f79a-4638-a2cd-9e9219b02682\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.497759 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1105da5-f79a-4638-a2cd-9e9219b02682-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"b1105da5-f79a-4638-a2cd-9e9219b02682\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.497780 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/6660c016-1faa-43e2-904c-3e8db37f6b3d-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 
00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.497799 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6660c016-1faa-43e2-904c-3e8db37f6b3d-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.497822 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmrg2\" (UniqueName: \"kubernetes.io/projected/b1105da5-f79a-4638-a2cd-9e9219b02682-kube-api-access-nmrg2\") pod \"logging-loki-compactor-0\" (UID: \"b1105da5-f79a-4638-a2cd-9e9219b02682\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.497847 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b1105da5-f79a-4638-a2cd-9e9219b02682-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"b1105da5-f79a-4638-a2cd-9e9219b02682\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.498852 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/6660c016-1faa-43e2-904c-3e8db37f6b3d-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.498890 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-56a12d72-ea0e-48f7-a8b8-e735ad954557\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56a12d72-ea0e-48f7-a8b8-e735ad954557\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " 
pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.498909 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6660c016-1faa-43e2-904c-3e8db37f6b3d-config\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.498936 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bqjz\" (UniqueName: \"kubernetes.io/projected/6660c016-1faa-43e2-904c-3e8db37f6b3d-kube-api-access-6bqjz\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.498991 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6660c016-1faa-43e2-904c-3e8db37f6b3d-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.500577 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6660c016-1faa-43e2-904c-3e8db37f6b3d-config\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.501399 4847 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.501433 4847 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-49955224-8540-4105-a0e7-b53663f5d94c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49955224-8540-4105-a0e7-b53663f5d94c\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3abbb31e76deb8fe9c41cb33fef0a5fc37e131302d4a96e076df48081b46aa59/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.502014 4847 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.502045 4847 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-56a12d72-ea0e-48f7-a8b8-e735ad954557\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56a12d72-ea0e-48f7-a8b8-e735ad954557\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ccd605067f3794f45786fa4cd9b4b3b80b8b2ba9f5120d3383e13f81a9b5aa78/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.503913 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/6660c016-1faa-43e2-904c-3e8db37f6b3d-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.506392 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: 
\"kubernetes.io/secret/6660c016-1faa-43e2-904c-3e8db37f6b3d-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.507222 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6660c016-1faa-43e2-904c-3e8db37f6b3d-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.522209 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bqjz\" (UniqueName: \"kubernetes.io/projected/6660c016-1faa-43e2-904c-3e8db37f6b3d-kube-api-access-6bqjz\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.522281 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-49955224-8540-4105-a0e7-b53663f5d94c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-49955224-8540-4105-a0e7-b53663f5d94c\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.530368 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-56a12d72-ea0e-48f7-a8b8-e735ad954557\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-56a12d72-ea0e-48f7-a8b8-e735ad954557\") pod \"logging-loki-ingester-0\" (UID: \"6660c016-1faa-43e2-904c-3e8db37f6b3d\") " pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.600320 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wdqbz\" (UniqueName: \"kubernetes.io/projected/29b3aa92-5b12-457c-b25a-27aa73aa8c37-kube-api-access-wdqbz\") pod \"logging-loki-index-gateway-0\" (UID: \"29b3aa92-5b12-457c-b25a-27aa73aa8c37\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.600378 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-455309a9-cb6d-440d-9ef3-c2ad538d9930\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-455309a9-cb6d-440d-9ef3-c2ad538d9930\") pod \"logging-loki-compactor-0\" (UID: \"b1105da5-f79a-4638-a2cd-9e9219b02682\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.600443 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/b1105da5-f79a-4638-a2cd-9e9219b02682-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"b1105da5-f79a-4638-a2cd-9e9219b02682\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.600476 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1105da5-f79a-4638-a2cd-9e9219b02682-config\") pod \"logging-loki-compactor-0\" (UID: \"b1105da5-f79a-4638-a2cd-9e9219b02682\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.600500 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/b1105da5-f79a-4638-a2cd-9e9219b02682-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"b1105da5-f79a-4638-a2cd-9e9219b02682\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.600524 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1105da5-f79a-4638-a2cd-9e9219b02682-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"b1105da5-f79a-4638-a2cd-9e9219b02682\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.600546 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2bf4e4eb-cc6b-4541-ad7a-a91678596ed8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bf4e4eb-cc6b-4541-ad7a-a91678596ed8\") pod \"logging-loki-index-gateway-0\" (UID: \"29b3aa92-5b12-457c-b25a-27aa73aa8c37\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.600595 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/29b3aa92-5b12-457c-b25a-27aa73aa8c37-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"29b3aa92-5b12-457c-b25a-27aa73aa8c37\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.600632 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/29b3aa92-5b12-457c-b25a-27aa73aa8c37-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"29b3aa92-5b12-457c-b25a-27aa73aa8c37\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.600650 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmrg2\" (UniqueName: \"kubernetes.io/projected/b1105da5-f79a-4638-a2cd-9e9219b02682-kube-api-access-nmrg2\") pod \"logging-loki-compactor-0\" (UID: 
\"b1105da5-f79a-4638-a2cd-9e9219b02682\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.600680 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b3aa92-5b12-457c-b25a-27aa73aa8c37-config\") pod \"logging-loki-index-gateway-0\" (UID: \"29b3aa92-5b12-457c-b25a-27aa73aa8c37\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.600697 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/29b3aa92-5b12-457c-b25a-27aa73aa8c37-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"29b3aa92-5b12-457c-b25a-27aa73aa8c37\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.600719 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29b3aa92-5b12-457c-b25a-27aa73aa8c37-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"29b3aa92-5b12-457c-b25a-27aa73aa8c37\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.600815 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b1105da5-f79a-4638-a2cd-9e9219b02682-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"b1105da5-f79a-4638-a2cd-9e9219b02682\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.601846 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1105da5-f79a-4638-a2cd-9e9219b02682-config\") pod \"logging-loki-compactor-0\" (UID: 
\"b1105da5-f79a-4638-a2cd-9e9219b02682\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.602118 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1105da5-f79a-4638-a2cd-9e9219b02682-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"b1105da5-f79a-4638-a2cd-9e9219b02682\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.602759 4847 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.602779 4847 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-455309a9-cb6d-440d-9ef3-c2ad538d9930\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-455309a9-cb6d-440d-9ef3-c2ad538d9930\") pod \"logging-loki-compactor-0\" (UID: \"b1105da5-f79a-4638-a2cd-9e9219b02682\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c1dbe74b29b2d5748e9ea55e1be059257d3d9613df85944fcb68b8ed7ac7a9da/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.604771 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b1105da5-f79a-4638-a2cd-9e9219b02682-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"b1105da5-f79a-4638-a2cd-9e9219b02682\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.605370 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/b1105da5-f79a-4638-a2cd-9e9219b02682-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: 
\"b1105da5-f79a-4638-a2cd-9e9219b02682\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.607070 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/b1105da5-f79a-4638-a2cd-9e9219b02682-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"b1105da5-f79a-4638-a2cd-9e9219b02682\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.626795 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmrg2\" (UniqueName: \"kubernetes.io/projected/b1105da5-f79a-4638-a2cd-9e9219b02682-kube-api-access-nmrg2\") pod \"logging-loki-compactor-0\" (UID: \"b1105da5-f79a-4638-a2cd-9e9219b02682\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.628360 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d"] Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.641997 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-455309a9-cb6d-440d-9ef3-c2ad538d9930\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-455309a9-cb6d-440d-9ef3-c2ad538d9930\") pod \"logging-loki-compactor-0\" (UID: \"b1105da5-f79a-4638-a2cd-9e9219b02682\") " pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.653074 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" event={"ID":"aebf8b18-099f-4bfe-88ce-a34461bb4b51","Type":"ContainerStarted","Data":"369c4f7710cd5e21265d38411abc1fb3287006a669e339ad13a5743eefd128fd"} Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.654064 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2" event={"ID":"810922d4-8577-496f-ad3a-a49c2122d91d","Type":"ContainerStarted","Data":"6eea8f5b3f229c560ada6bed8b2b84965f06a234757685201d562e8f02375f3e"} Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.654980 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" event={"ID":"777cf1df-2302-473d-87b1-893df3304f21","Type":"ContainerStarted","Data":"b97de6be07d3cb2b7201ce151b93213acfc1c6286b8fc7a471744cb360bc4610"} Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.655863 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" event={"ID":"ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753","Type":"ContainerStarted","Data":"89f6dc7f5af70a6f4bc2e5716a06a97bfea72ed48c304a937c4df79f06550c77"} Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.668399 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.677053 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.702547 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/29b3aa92-5b12-457c-b25a-27aa73aa8c37-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"29b3aa92-5b12-457c-b25a-27aa73aa8c37\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.702630 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b3aa92-5b12-457c-b25a-27aa73aa8c37-config\") pod \"logging-loki-index-gateway-0\" (UID: \"29b3aa92-5b12-457c-b25a-27aa73aa8c37\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.702655 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/29b3aa92-5b12-457c-b25a-27aa73aa8c37-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"29b3aa92-5b12-457c-b25a-27aa73aa8c37\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.702672 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29b3aa92-5b12-457c-b25a-27aa73aa8c37-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"29b3aa92-5b12-457c-b25a-27aa73aa8c37\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.702710 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdqbz\" (UniqueName: \"kubernetes.io/projected/29b3aa92-5b12-457c-b25a-27aa73aa8c37-kube-api-access-wdqbz\") pod \"logging-loki-index-gateway-0\" 
(UID: \"29b3aa92-5b12-457c-b25a-27aa73aa8c37\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.702777 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2bf4e4eb-cc6b-4541-ad7a-a91678596ed8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bf4e4eb-cc6b-4541-ad7a-a91678596ed8\") pod \"logging-loki-index-gateway-0\" (UID: \"29b3aa92-5b12-457c-b25a-27aa73aa8c37\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.702798 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/29b3aa92-5b12-457c-b25a-27aa73aa8c37-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"29b3aa92-5b12-457c-b25a-27aa73aa8c37\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.704378 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29b3aa92-5b12-457c-b25a-27aa73aa8c37-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"29b3aa92-5b12-457c-b25a-27aa73aa8c37\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.705128 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b3aa92-5b12-457c-b25a-27aa73aa8c37-config\") pod \"logging-loki-index-gateway-0\" (UID: \"29b3aa92-5b12-457c-b25a-27aa73aa8c37\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.705550 4847 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.705629 4847 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2bf4e4eb-cc6b-4541-ad7a-a91678596ed8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bf4e4eb-cc6b-4541-ad7a-a91678596ed8\") pod \"logging-loki-index-gateway-0\" (UID: \"29b3aa92-5b12-457c-b25a-27aa73aa8c37\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5b27d305d772f080e412e5aee3c0c6e320bc972c57f87bb82a812d0ff51d5428/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.706736 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/29b3aa92-5b12-457c-b25a-27aa73aa8c37-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"29b3aa92-5b12-457c-b25a-27aa73aa8c37\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.707578 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/29b3aa92-5b12-457c-b25a-27aa73aa8c37-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"29b3aa92-5b12-457c-b25a-27aa73aa8c37\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.722431 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/29b3aa92-5b12-457c-b25a-27aa73aa8c37-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"29b3aa92-5b12-457c-b25a-27aa73aa8c37\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.725758 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-2bf4e4eb-cc6b-4541-ad7a-a91678596ed8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2bf4e4eb-cc6b-4541-ad7a-a91678596ed8\") pod \"logging-loki-index-gateway-0\" (UID: \"29b3aa92-5b12-457c-b25a-27aa73aa8c37\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.729446 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdqbz\" (UniqueName: \"kubernetes.io/projected/29b3aa92-5b12-457c-b25a-27aa73aa8c37-kube-api-access-wdqbz\") pod \"logging-loki-index-gateway-0\" (UID: \"29b3aa92-5b12-457c-b25a-27aa73aa8c37\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.761314 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.934493 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 18 00:39:13 crc kubenswrapper[4847]: I0218 00:39:13.965863 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 18 00:39:14 crc kubenswrapper[4847]: I0218 00:39:14.322789 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 18 00:39:14 crc kubenswrapper[4847]: W0218 00:39:14.331400 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29b3aa92_5b12_457c_b25a_27aa73aa8c37.slice/crio-b8888ec015102670903010b2c62465bb563c2b07a5f4d56bf0d183731bd2d5fc WatchSource:0}: Error finding container b8888ec015102670903010b2c62465bb563c2b07a5f4d56bf0d183731bd2d5fc: Status 404 returned error can't find the container with id b8888ec015102670903010b2c62465bb563c2b07a5f4d56bf0d183731bd2d5fc Feb 18 00:39:14 crc kubenswrapper[4847]: I0218 
00:39:14.663141 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"29b3aa92-5b12-457c-b25a-27aa73aa8c37","Type":"ContainerStarted","Data":"b8888ec015102670903010b2c62465bb563c2b07a5f4d56bf0d183731bd2d5fc"} Feb 18 00:39:14 crc kubenswrapper[4847]: I0218 00:39:14.665531 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"b1105da5-f79a-4638-a2cd-9e9219b02682","Type":"ContainerStarted","Data":"b919dae6f85ffd4f5dcc9053dfd80942801c071493dc3e4e26dcc359a9d4d50e"} Feb 18 00:39:14 crc kubenswrapper[4847]: I0218 00:39:14.666729 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rpn9l" podUID="1f641c3f-2a1e-4023-a8b6-e476eaab95e5" containerName="registry-server" containerID="cri-o://204470387b1de84a298a45a2054f33e79bef8417a901540aeb6de8093b70d83d" gracePeriod=2 Feb 18 00:39:14 crc kubenswrapper[4847]: I0218 00:39:14.666795 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"6660c016-1faa-43e2-904c-3e8db37f6b3d","Type":"ContainerStarted","Data":"9f8f6fd49de44863cc65031f4b907d21aa79ad9101afb07c36d1d043fb0b0937"} Feb 18 00:39:15 crc kubenswrapper[4847]: I0218 00:39:15.025914 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rpn9l" Feb 18 00:39:15 crc kubenswrapper[4847]: I0218 00:39:15.125721 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f641c3f-2a1e-4023-a8b6-e476eaab95e5-utilities\") pod \"1f641c3f-2a1e-4023-a8b6-e476eaab95e5\" (UID: \"1f641c3f-2a1e-4023-a8b6-e476eaab95e5\") " Feb 18 00:39:15 crc kubenswrapper[4847]: I0218 00:39:15.125816 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f641c3f-2a1e-4023-a8b6-e476eaab95e5-catalog-content\") pod \"1f641c3f-2a1e-4023-a8b6-e476eaab95e5\" (UID: \"1f641c3f-2a1e-4023-a8b6-e476eaab95e5\") " Feb 18 00:39:15 crc kubenswrapper[4847]: I0218 00:39:15.125851 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km744\" (UniqueName: \"kubernetes.io/projected/1f641c3f-2a1e-4023-a8b6-e476eaab95e5-kube-api-access-km744\") pod \"1f641c3f-2a1e-4023-a8b6-e476eaab95e5\" (UID: \"1f641c3f-2a1e-4023-a8b6-e476eaab95e5\") " Feb 18 00:39:15 crc kubenswrapper[4847]: I0218 00:39:15.126998 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f641c3f-2a1e-4023-a8b6-e476eaab95e5-utilities" (OuterVolumeSpecName: "utilities") pod "1f641c3f-2a1e-4023-a8b6-e476eaab95e5" (UID: "1f641c3f-2a1e-4023-a8b6-e476eaab95e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:39:15 crc kubenswrapper[4847]: I0218 00:39:15.131150 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f641c3f-2a1e-4023-a8b6-e476eaab95e5-kube-api-access-km744" (OuterVolumeSpecName: "kube-api-access-km744") pod "1f641c3f-2a1e-4023-a8b6-e476eaab95e5" (UID: "1f641c3f-2a1e-4023-a8b6-e476eaab95e5"). InnerVolumeSpecName "kube-api-access-km744". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:39:15 crc kubenswrapper[4847]: I0218 00:39:15.183013 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f641c3f-2a1e-4023-a8b6-e476eaab95e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f641c3f-2a1e-4023-a8b6-e476eaab95e5" (UID: "1f641c3f-2a1e-4023-a8b6-e476eaab95e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:39:15 crc kubenswrapper[4847]: I0218 00:39:15.228134 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f641c3f-2a1e-4023-a8b6-e476eaab95e5-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:15 crc kubenswrapper[4847]: I0218 00:39:15.228167 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f641c3f-2a1e-4023-a8b6-e476eaab95e5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:15 crc kubenswrapper[4847]: I0218 00:39:15.228177 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km744\" (UniqueName: \"kubernetes.io/projected/1f641c3f-2a1e-4023-a8b6-e476eaab95e5-kube-api-access-km744\") on node \"crc\" DevicePath \"\"" Feb 18 00:39:15 crc kubenswrapper[4847]: I0218 00:39:15.677175 4847 generic.go:334] "Generic (PLEG): container finished" podID="1f641c3f-2a1e-4023-a8b6-e476eaab95e5" containerID="204470387b1de84a298a45a2054f33e79bef8417a901540aeb6de8093b70d83d" exitCode=0 Feb 18 00:39:15 crc kubenswrapper[4847]: I0218 00:39:15.677214 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpn9l" event={"ID":"1f641c3f-2a1e-4023-a8b6-e476eaab95e5","Type":"ContainerDied","Data":"204470387b1de84a298a45a2054f33e79bef8417a901540aeb6de8093b70d83d"} Feb 18 00:39:15 crc kubenswrapper[4847]: I0218 00:39:15.677239 4847 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-rpn9l" event={"ID":"1f641c3f-2a1e-4023-a8b6-e476eaab95e5","Type":"ContainerDied","Data":"6c7f12e707285c09001f1c847b4db1ec86208a4eb7361ce7b7e99da26d5cbbe2"} Feb 18 00:39:15 crc kubenswrapper[4847]: I0218 00:39:15.677259 4847 scope.go:117] "RemoveContainer" containerID="204470387b1de84a298a45a2054f33e79bef8417a901540aeb6de8093b70d83d" Feb 18 00:39:15 crc kubenswrapper[4847]: I0218 00:39:15.677270 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rpn9l" Feb 18 00:39:15 crc kubenswrapper[4847]: I0218 00:39:15.698383 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rpn9l"] Feb 18 00:39:15 crc kubenswrapper[4847]: I0218 00:39:15.703199 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rpn9l"] Feb 18 00:39:15 crc kubenswrapper[4847]: I0218 00:39:15.993856 4847 scope.go:117] "RemoveContainer" containerID="1c302e66bb64f11a9c105670a23ac6740e405edc4e2cfbdc6316991dd09bd98b" Feb 18 00:39:16 crc kubenswrapper[4847]: I0218 00:39:16.694937 4847 scope.go:117] "RemoveContainer" containerID="b9a430e55f975d70ed6e049ad8d32188800d4b9882d80ec0b988ca2fee8fec05" Feb 18 00:39:16 crc kubenswrapper[4847]: I0218 00:39:16.741724 4847 scope.go:117] "RemoveContainer" containerID="204470387b1de84a298a45a2054f33e79bef8417a901540aeb6de8093b70d83d" Feb 18 00:39:16 crc kubenswrapper[4847]: E0218 00:39:16.742332 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"204470387b1de84a298a45a2054f33e79bef8417a901540aeb6de8093b70d83d\": container with ID starting with 204470387b1de84a298a45a2054f33e79bef8417a901540aeb6de8093b70d83d not found: ID does not exist" containerID="204470387b1de84a298a45a2054f33e79bef8417a901540aeb6de8093b70d83d" Feb 18 00:39:16 crc kubenswrapper[4847]: I0218 
00:39:16.742388 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"204470387b1de84a298a45a2054f33e79bef8417a901540aeb6de8093b70d83d"} err="failed to get container status \"204470387b1de84a298a45a2054f33e79bef8417a901540aeb6de8093b70d83d\": rpc error: code = NotFound desc = could not find container \"204470387b1de84a298a45a2054f33e79bef8417a901540aeb6de8093b70d83d\": container with ID starting with 204470387b1de84a298a45a2054f33e79bef8417a901540aeb6de8093b70d83d not found: ID does not exist" Feb 18 00:39:16 crc kubenswrapper[4847]: I0218 00:39:16.742426 4847 scope.go:117] "RemoveContainer" containerID="1c302e66bb64f11a9c105670a23ac6740e405edc4e2cfbdc6316991dd09bd98b" Feb 18 00:39:16 crc kubenswrapper[4847]: E0218 00:39:16.742988 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c302e66bb64f11a9c105670a23ac6740e405edc4e2cfbdc6316991dd09bd98b\": container with ID starting with 1c302e66bb64f11a9c105670a23ac6740e405edc4e2cfbdc6316991dd09bd98b not found: ID does not exist" containerID="1c302e66bb64f11a9c105670a23ac6740e405edc4e2cfbdc6316991dd09bd98b" Feb 18 00:39:16 crc kubenswrapper[4847]: I0218 00:39:16.743045 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c302e66bb64f11a9c105670a23ac6740e405edc4e2cfbdc6316991dd09bd98b"} err="failed to get container status \"1c302e66bb64f11a9c105670a23ac6740e405edc4e2cfbdc6316991dd09bd98b\": rpc error: code = NotFound desc = could not find container \"1c302e66bb64f11a9c105670a23ac6740e405edc4e2cfbdc6316991dd09bd98b\": container with ID starting with 1c302e66bb64f11a9c105670a23ac6740e405edc4e2cfbdc6316991dd09bd98b not found: ID does not exist" Feb 18 00:39:16 crc kubenswrapper[4847]: I0218 00:39:16.743080 4847 scope.go:117] "RemoveContainer" containerID="b9a430e55f975d70ed6e049ad8d32188800d4b9882d80ec0b988ca2fee8fec05" Feb 18 00:39:16 crc 
kubenswrapper[4847]: E0218 00:39:16.743469 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9a430e55f975d70ed6e049ad8d32188800d4b9882d80ec0b988ca2fee8fec05\": container with ID starting with b9a430e55f975d70ed6e049ad8d32188800d4b9882d80ec0b988ca2fee8fec05 not found: ID does not exist" containerID="b9a430e55f975d70ed6e049ad8d32188800d4b9882d80ec0b988ca2fee8fec05" Feb 18 00:39:16 crc kubenswrapper[4847]: I0218 00:39:16.743510 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9a430e55f975d70ed6e049ad8d32188800d4b9882d80ec0b988ca2fee8fec05"} err="failed to get container status \"b9a430e55f975d70ed6e049ad8d32188800d4b9882d80ec0b988ca2fee8fec05\": rpc error: code = NotFound desc = could not find container \"b9a430e55f975d70ed6e049ad8d32188800d4b9882d80ec0b988ca2fee8fec05\": container with ID starting with b9a430e55f975d70ed6e049ad8d32188800d4b9882d80ec0b988ca2fee8fec05 not found: ID does not exist" Feb 18 00:39:17 crc kubenswrapper[4847]: I0218 00:39:17.414243 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f641c3f-2a1e-4023-a8b6-e476eaab95e5" path="/var/lib/kubelet/pods/1f641c3f-2a1e-4023-a8b6-e476eaab95e5/volumes" Feb 18 00:39:17 crc kubenswrapper[4847]: I0218 00:39:17.703155 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" event={"ID":"777cf1df-2302-473d-87b1-893df3304f21","Type":"ContainerStarted","Data":"beaa6566b16a6dc9df5b7c8704d633c73cd09ee01f3ef7ea59defa2f8339f542"} Feb 18 00:39:17 crc kubenswrapper[4847]: I0218 00:39:17.704865 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2" event={"ID":"810922d4-8577-496f-ad3a-a49c2122d91d","Type":"ContainerStarted","Data":"af45607aae490accfeeace4e2a4e72e16d005857ed459d39b1d75f019d540627"} Feb 18 00:39:17 crc kubenswrapper[4847]: 
I0218 00:39:17.705159 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2" Feb 18 00:39:17 crc kubenswrapper[4847]: I0218 00:39:17.707162 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8" event={"ID":"9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843","Type":"ContainerStarted","Data":"8419e03ce86c0c8b0954da2e8026f140ef941a7dc05d7e302f102e815acbaf0b"} Feb 18 00:39:17 crc kubenswrapper[4847]: I0218 00:39:17.707675 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8" Feb 18 00:39:17 crc kubenswrapper[4847]: I0218 00:39:17.709248 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"29b3aa92-5b12-457c-b25a-27aa73aa8c37","Type":"ContainerStarted","Data":"53717b2788c50e7fc8566789007f56edf1a4b3f279853e33afa212d5b3c41f41"} Feb 18 00:39:17 crc kubenswrapper[4847]: I0218 00:39:17.709414 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:17 crc kubenswrapper[4847]: I0218 00:39:17.713108 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" event={"ID":"ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753","Type":"ContainerStarted","Data":"8147b776f195c7287247a01cb950f849c8d8d96e638502483c328b231e443217"} Feb 18 00:39:17 crc kubenswrapper[4847]: I0218 00:39:17.713504 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" Feb 18 00:39:17 crc kubenswrapper[4847]: I0218 00:39:17.717559 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" 
event={"ID":"b1105da5-f79a-4638-a2cd-9e9219b02682","Type":"ContainerStarted","Data":"58007fee41691f261b74b58d787ee71d508b9487e21656a093d82f8f6e30133a"} Feb 18 00:39:17 crc kubenswrapper[4847]: I0218 00:39:17.717685 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:17 crc kubenswrapper[4847]: I0218 00:39:17.719684 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"6660c016-1faa-43e2-904c-3e8db37f6b3d","Type":"ContainerStarted","Data":"ec82251922dfd17737439e10c81361725f374bcc17cbb3bc4efc9778e0c6aa15"} Feb 18 00:39:17 crc kubenswrapper[4847]: I0218 00:39:17.719840 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:39:17 crc kubenswrapper[4847]: I0218 00:39:17.721174 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" event={"ID":"aebf8b18-099f-4bfe-88ce-a34461bb4b51","Type":"ContainerStarted","Data":"35bb99eebe5f13aa2e494d5878102aea1ba21711bbf5e617ef10be0cf520960e"} Feb 18 00:39:17 crc kubenswrapper[4847]: I0218 00:39:17.730952 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2" podStartSLOduration=2.039818081 podStartE2EDuration="5.730938337s" podCreationTimestamp="2026-02-18 00:39:12 +0000 UTC" firstStartedPulling="2026-02-18 00:39:13.128816499 +0000 UTC m=+826.506167441" lastFinishedPulling="2026-02-18 00:39:16.819936715 +0000 UTC m=+830.197287697" observedRunningTime="2026-02-18 00:39:17.728786376 +0000 UTC m=+831.106137328" watchObservedRunningTime="2026-02-18 00:39:17.730938337 +0000 UTC m=+831.108289299" Feb 18 00:39:17 crc kubenswrapper[4847]: I0218 00:39:17.751428 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" podStartSLOduration=1.8324462000000001 podStartE2EDuration="5.751381978s" podCreationTimestamp="2026-02-18 00:39:12 +0000 UTC" firstStartedPulling="2026-02-18 00:39:12.901432467 +0000 UTC m=+826.278783409" lastFinishedPulling="2026-02-18 00:39:16.820368195 +0000 UTC m=+830.197719187" observedRunningTime="2026-02-18 00:39:17.748651014 +0000 UTC m=+831.126001966" watchObservedRunningTime="2026-02-18 00:39:17.751381978 +0000 UTC m=+831.128732920" Feb 18 00:39:17 crc kubenswrapper[4847]: I0218 00:39:17.782920 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.0390687 podStartE2EDuration="5.78290129s" podCreationTimestamp="2026-02-18 00:39:12 +0000 UTC" firstStartedPulling="2026-02-18 00:39:14.076057654 +0000 UTC m=+827.453408596" lastFinishedPulling="2026-02-18 00:39:16.819890234 +0000 UTC m=+830.197241186" observedRunningTime="2026-02-18 00:39:17.775362683 +0000 UTC m=+831.152713635" watchObservedRunningTime="2026-02-18 00:39:17.78290129 +0000 UTC m=+831.160252242" Feb 18 00:39:17 crc kubenswrapper[4847]: I0218 00:39:17.797837 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8" podStartSLOduration=2.5327261549999998 podStartE2EDuration="6.797815001s" podCreationTimestamp="2026-02-18 00:39:11 +0000 UTC" firstStartedPulling="2026-02-18 00:39:12.614661537 +0000 UTC m=+825.992012479" lastFinishedPulling="2026-02-18 00:39:16.879750383 +0000 UTC m=+830.257101325" observedRunningTime="2026-02-18 00:39:17.795050896 +0000 UTC m=+831.172401848" watchObservedRunningTime="2026-02-18 00:39:17.797815001 +0000 UTC m=+831.175165953" Feb 18 00:39:17 crc kubenswrapper[4847]: I0218 00:39:17.818437 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.005353427 
podStartE2EDuration="5.818406116s" podCreationTimestamp="2026-02-18 00:39:12 +0000 UTC" firstStartedPulling="2026-02-18 00:39:13.991541095 +0000 UTC m=+827.368892037" lastFinishedPulling="2026-02-18 00:39:16.804593784 +0000 UTC m=+830.181944726" observedRunningTime="2026-02-18 00:39:17.817672908 +0000 UTC m=+831.195023870" watchObservedRunningTime="2026-02-18 00:39:17.818406116 +0000 UTC m=+831.195757098" Feb 18 00:39:17 crc kubenswrapper[4847]: I0218 00:39:17.846571 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.418423458 podStartE2EDuration="5.845883472s" podCreationTimestamp="2026-02-18 00:39:12 +0000 UTC" firstStartedPulling="2026-02-18 00:39:14.334942407 +0000 UTC m=+827.712293339" lastFinishedPulling="2026-02-18 00:39:16.762402411 +0000 UTC m=+830.139753353" observedRunningTime="2026-02-18 00:39:17.841379386 +0000 UTC m=+831.218730338" watchObservedRunningTime="2026-02-18 00:39:17.845883472 +0000 UTC m=+831.223234424" Feb 18 00:39:19 crc kubenswrapper[4847]: I0218 00:39:19.738765 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" event={"ID":"777cf1df-2302-473d-87b1-893df3304f21","Type":"ContainerStarted","Data":"1d723a6780eaa5676e194ea8c829809681baf75dcba50ce46780aeb621bac7d0"} Feb 18 00:39:19 crc kubenswrapper[4847]: I0218 00:39:19.740228 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:19 crc kubenswrapper[4847]: I0218 00:39:19.741263 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" event={"ID":"aebf8b18-099f-4bfe-88ce-a34461bb4b51","Type":"ContainerStarted","Data":"b7142908056624770ba36287e74b21f8bdb4ae29dd1be3251b13eaf13249edc0"} Feb 18 00:39:19 crc kubenswrapper[4847]: I0218 00:39:19.752811 4847 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:19 crc kubenswrapper[4847]: I0218 00:39:19.771073 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" podStartSLOduration=2.21571603 podStartE2EDuration="7.771051603s" podCreationTimestamp="2026-02-18 00:39:12 +0000 UTC" firstStartedPulling="2026-02-18 00:39:13.633629111 +0000 UTC m=+827.010980063" lastFinishedPulling="2026-02-18 00:39:19.188964694 +0000 UTC m=+832.566315636" observedRunningTime="2026-02-18 00:39:19.766029035 +0000 UTC m=+833.143379987" watchObservedRunningTime="2026-02-18 00:39:19.771051603 +0000 UTC m=+833.148402555" Feb 18 00:39:19 crc kubenswrapper[4847]: I0218 00:39:19.852481 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" podStartSLOduration=1.957219536 podStartE2EDuration="7.8524568s" podCreationTimestamp="2026-02-18 00:39:12 +0000 UTC" firstStartedPulling="2026-02-18 00:39:13.302459976 +0000 UTC m=+826.679810918" lastFinishedPulling="2026-02-18 00:39:19.19769721 +0000 UTC m=+832.575048182" observedRunningTime="2026-02-18 00:39:19.849172912 +0000 UTC m=+833.226523894" watchObservedRunningTime="2026-02-18 00:39:19.8524568 +0000 UTC m=+833.229807782" Feb 18 00:39:20 crc kubenswrapper[4847]: I0218 00:39:20.750905 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:20 crc kubenswrapper[4847]: I0218 00:39:20.750967 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:20 crc kubenswrapper[4847]: I0218 00:39:20.750982 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:20 crc kubenswrapper[4847]: I0218 
00:39:20.766808 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:20 crc kubenswrapper[4847]: I0218 00:39:20.768812 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-9c654d8fb-tcxtw" Feb 18 00:39:20 crc kubenswrapper[4847]: I0218 00:39:20.768890 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-9c654d8fb-r2v6d" Feb 18 00:39:32 crc kubenswrapper[4847]: I0218 00:39:32.348720 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-x76k8" Feb 18 00:39:32 crc kubenswrapper[4847]: I0218 00:39:32.642998 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-wnr8f" Feb 18 00:39:32 crc kubenswrapper[4847]: I0218 00:39:32.665969 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-wsvv2" Feb 18 00:39:33 crc kubenswrapper[4847]: I0218 00:39:33.679202 4847 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 18 00:39:33 crc kubenswrapper[4847]: I0218 00:39:33.679296 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="6660c016-1faa-43e2-904c-3e8db37f6b3d" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 00:39:33 crc kubenswrapper[4847]: I0218 00:39:33.684814 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Feb 18 00:39:33 crc 
kubenswrapper[4847]: I0218 00:39:33.779531 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 18 00:39:43 crc kubenswrapper[4847]: I0218 00:39:43.678798 4847 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 18 00:39:43 crc kubenswrapper[4847]: I0218 00:39:43.679578 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="6660c016-1faa-43e2-904c-3e8db37f6b3d" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 00:39:53 crc kubenswrapper[4847]: I0218 00:39:53.677723 4847 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 18 00:39:53 crc kubenswrapper[4847]: I0218 00:39:53.678367 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="6660c016-1faa-43e2-904c-3e8db37f6b3d" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 00:40:03 crc kubenswrapper[4847]: I0218 00:40:03.675856 4847 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 18 00:40:03 crc kubenswrapper[4847]: I0218 00:40:03.676508 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="6660c016-1faa-43e2-904c-3e8db37f6b3d" 
containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 00:40:13 crc kubenswrapper[4847]: I0218 00:40:13.681994 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 18 00:40:23 crc kubenswrapper[4847]: I0218 00:40:23.491526 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:40:23 crc kubenswrapper[4847]: I0218 00:40:23.492450 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.312053 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-l664b"] Feb 18 00:40:31 crc kubenswrapper[4847]: E0218 00:40:31.314138 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f641c3f-2a1e-4023-a8b6-e476eaab95e5" containerName="extract-content" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.314172 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f641c3f-2a1e-4023-a8b6-e476eaab95e5" containerName="extract-content" Feb 18 00:40:31 crc kubenswrapper[4847]: E0218 00:40:31.314204 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f641c3f-2a1e-4023-a8b6-e476eaab95e5" containerName="extract-utilities" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.314218 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f641c3f-2a1e-4023-a8b6-e476eaab95e5" containerName="extract-utilities" Feb 18 
00:40:31 crc kubenswrapper[4847]: E0218 00:40:31.314249 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f641c3f-2a1e-4023-a8b6-e476eaab95e5" containerName="registry-server" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.314264 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f641c3f-2a1e-4023-a8b6-e476eaab95e5" containerName="registry-server" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.314484 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f641c3f-2a1e-4023-a8b6-e476eaab95e5" containerName="registry-server" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.315333 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.318308 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.318518 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-w947b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.319522 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.319635 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.320437 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.340323 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.342730 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-l664b"] Feb 18 00:40:31 crc 
kubenswrapper[4847]: I0218 00:40:31.457828 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbfz5\" (UniqueName: \"kubernetes.io/projected/766b34e0-a6f6-477f-a335-dee0e718c2f3-kube-api-access-zbfz5\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.457894 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/766b34e0-a6f6-477f-a335-dee0e718c2f3-metrics\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.458011 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-config-openshift-service-cacrt\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.458059 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-config\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.458106 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-trusted-ca\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.458127 
4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-entrypoint\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.458151 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/766b34e0-a6f6-477f-a335-dee0e718c2f3-collector-syslog-receiver\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.458194 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/766b34e0-a6f6-477f-a335-dee0e718c2f3-sa-token\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.458272 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/766b34e0-a6f6-477f-a335-dee0e718c2f3-tmp\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.458337 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/766b34e0-a6f6-477f-a335-dee0e718c2f3-collector-token\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.458424 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/766b34e0-a6f6-477f-a335-dee0e718c2f3-datadir\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.474413 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-l664b"] Feb 18 00:40:31 crc kubenswrapper[4847]: E0218 00:40:31.474947 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-zbfz5 metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-l664b" podUID="766b34e0-a6f6-477f-a335-dee0e718c2f3" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.559468 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-config\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.559534 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-trusted-ca\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.559559 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-entrypoint\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc 
kubenswrapper[4847]: I0218 00:40:31.559594 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/766b34e0-a6f6-477f-a335-dee0e718c2f3-collector-syslog-receiver\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.559659 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/766b34e0-a6f6-477f-a335-dee0e718c2f3-sa-token\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.559712 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/766b34e0-a6f6-477f-a335-dee0e718c2f3-tmp\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.559756 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/766b34e0-a6f6-477f-a335-dee0e718c2f3-collector-token\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.559800 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/766b34e0-a6f6-477f-a335-dee0e718c2f3-datadir\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.559822 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbfz5\" (UniqueName: 
\"kubernetes.io/projected/766b34e0-a6f6-477f-a335-dee0e718c2f3-kube-api-access-zbfz5\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.559845 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/766b34e0-a6f6-477f-a335-dee0e718c2f3-metrics\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.559890 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-config-openshift-service-cacrt\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.560073 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/766b34e0-a6f6-477f-a335-dee0e718c2f3-datadir\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.561073 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-config-openshift-service-cacrt\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.561099 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-config\") pod \"collector-l664b\" (UID: 
\"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.561126 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-trusted-ca\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.561720 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-entrypoint\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.567040 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/766b34e0-a6f6-477f-a335-dee0e718c2f3-tmp\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.567261 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/766b34e0-a6f6-477f-a335-dee0e718c2f3-metrics\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.580310 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbfz5\" (UniqueName: \"kubernetes.io/projected/766b34e0-a6f6-477f-a335-dee0e718c2f3-kube-api-access-zbfz5\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.583312 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sa-token\" (UniqueName: \"kubernetes.io/projected/766b34e0-a6f6-477f-a335-dee0e718c2f3-sa-token\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.583651 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/766b34e0-a6f6-477f-a335-dee0e718c2f3-collector-token\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:31 crc kubenswrapper[4847]: I0218 00:40:31.583985 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/766b34e0-a6f6-477f-a335-dee0e718c2f3-collector-syslog-receiver\") pod \"collector-l664b\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " pod="openshift-logging/collector-l664b" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.321855 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-l664b" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.336976 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-l664b" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.475119 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-config\") pod \"766b34e0-a6f6-477f-a335-dee0e718c2f3\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.475258 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/766b34e0-a6f6-477f-a335-dee0e718c2f3-metrics\") pod \"766b34e0-a6f6-477f-a335-dee0e718c2f3\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.476003 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-config" (OuterVolumeSpecName: "config") pod "766b34e0-a6f6-477f-a335-dee0e718c2f3" (UID: "766b34e0-a6f6-477f-a335-dee0e718c2f3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.476312 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/766b34e0-a6f6-477f-a335-dee0e718c2f3-collector-syslog-receiver\") pod \"766b34e0-a6f6-477f-a335-dee0e718c2f3\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.476413 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/766b34e0-a6f6-477f-a335-dee0e718c2f3-collector-token\") pod \"766b34e0-a6f6-477f-a335-dee0e718c2f3\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.476528 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/766b34e0-a6f6-477f-a335-dee0e718c2f3-tmp\") pod \"766b34e0-a6f6-477f-a335-dee0e718c2f3\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.476725 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-trusted-ca\") pod \"766b34e0-a6f6-477f-a335-dee0e718c2f3\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.476823 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-config-openshift-service-cacrt\") pod \"766b34e0-a6f6-477f-a335-dee0e718c2f3\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.476870 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zbfz5\" (UniqueName: \"kubernetes.io/projected/766b34e0-a6f6-477f-a335-dee0e718c2f3-kube-api-access-zbfz5\") pod \"766b34e0-a6f6-477f-a335-dee0e718c2f3\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.476931 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-entrypoint\") pod \"766b34e0-a6f6-477f-a335-dee0e718c2f3\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.476963 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/766b34e0-a6f6-477f-a335-dee0e718c2f3-datadir\") pod \"766b34e0-a6f6-477f-a335-dee0e718c2f3\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.477021 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/766b34e0-a6f6-477f-a335-dee0e718c2f3-sa-token\") pod \"766b34e0-a6f6-477f-a335-dee0e718c2f3\" (UID: \"766b34e0-a6f6-477f-a335-dee0e718c2f3\") " Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.477363 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "766b34e0-a6f6-477f-a335-dee0e718c2f3" (UID: "766b34e0-a6f6-477f-a335-dee0e718c2f3"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.477332 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/766b34e0-a6f6-477f-a335-dee0e718c2f3-datadir" (OuterVolumeSpecName: "datadir") pod "766b34e0-a6f6-477f-a335-dee0e718c2f3" (UID: "766b34e0-a6f6-477f-a335-dee0e718c2f3"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.477957 4847 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.478017 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "766b34e0-a6f6-477f-a335-dee0e718c2f3" (UID: "766b34e0-a6f6-477f-a335-dee0e718c2f3"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.478034 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.478701 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "766b34e0-a6f6-477f-a335-dee0e718c2f3" (UID: "766b34e0-a6f6-477f-a335-dee0e718c2f3"). InnerVolumeSpecName "config-openshift-service-cacrt". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.482580 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/766b34e0-a6f6-477f-a335-dee0e718c2f3-metrics" (OuterVolumeSpecName: "metrics") pod "766b34e0-a6f6-477f-a335-dee0e718c2f3" (UID: "766b34e0-a6f6-477f-a335-dee0e718c2f3"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.482961 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/766b34e0-a6f6-477f-a335-dee0e718c2f3-collector-token" (OuterVolumeSpecName: "collector-token") pod "766b34e0-a6f6-477f-a335-dee0e718c2f3" (UID: "766b34e0-a6f6-477f-a335-dee0e718c2f3"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.483992 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/766b34e0-a6f6-477f-a335-dee0e718c2f3-kube-api-access-zbfz5" (OuterVolumeSpecName: "kube-api-access-zbfz5") pod "766b34e0-a6f6-477f-a335-dee0e718c2f3" (UID: "766b34e0-a6f6-477f-a335-dee0e718c2f3"). InnerVolumeSpecName "kube-api-access-zbfz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.484892 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/766b34e0-a6f6-477f-a335-dee0e718c2f3-sa-token" (OuterVolumeSpecName: "sa-token") pod "766b34e0-a6f6-477f-a335-dee0e718c2f3" (UID: "766b34e0-a6f6-477f-a335-dee0e718c2f3"). InnerVolumeSpecName "sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.488880 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/766b34e0-a6f6-477f-a335-dee0e718c2f3-tmp" (OuterVolumeSpecName: "tmp") pod "766b34e0-a6f6-477f-a335-dee0e718c2f3" (UID: "766b34e0-a6f6-477f-a335-dee0e718c2f3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.495840 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/766b34e0-a6f6-477f-a335-dee0e718c2f3-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "766b34e0-a6f6-477f-a335-dee0e718c2f3" (UID: "766b34e0-a6f6-477f-a335-dee0e718c2f3"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.579751 4847 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.579820 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbfz5\" (UniqueName: \"kubernetes.io/projected/766b34e0-a6f6-477f-a335-dee0e718c2f3-kube-api-access-zbfz5\") on node \"crc\" DevicePath \"\"" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.579840 4847 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/766b34e0-a6f6-477f-a335-dee0e718c2f3-datadir\") on node \"crc\" DevicePath \"\"" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.579860 4847 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/766b34e0-a6f6-477f-a335-dee0e718c2f3-entrypoint\") on 
node \"crc\" DevicePath \"\"" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.579879 4847 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/766b34e0-a6f6-477f-a335-dee0e718c2f3-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.579894 4847 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/766b34e0-a6f6-477f-a335-dee0e718c2f3-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.579912 4847 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/766b34e0-a6f6-477f-a335-dee0e718c2f3-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.579929 4847 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/766b34e0-a6f6-477f-a335-dee0e718c2f3-collector-token\") on node \"crc\" DevicePath \"\"" Feb 18 00:40:32 crc kubenswrapper[4847]: I0218 00:40:32.579949 4847 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/766b34e0-a6f6-477f-a335-dee0e718c2f3-tmp\") on node \"crc\" DevicePath \"\"" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.329446 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-l664b" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.394741 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-l664b"] Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.416856 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-l664b"] Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.419587 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-m899v"] Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.420303 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.424535 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.425074 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.425530 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-w947b" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.427033 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.431543 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.435664 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.441012 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-m899v"] Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.498217 4847 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/7f274356-4622-4bbe-ad54-196514afaa20-collector-syslog-receiver\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.498272 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6hvc\" (UniqueName: \"kubernetes.io/projected/7f274356-4622-4bbe-ad54-196514afaa20-kube-api-access-g6hvc\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.498297 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/7f274356-4622-4bbe-ad54-196514afaa20-entrypoint\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.498346 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/7f274356-4622-4bbe-ad54-196514afaa20-sa-token\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.498373 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/7f274356-4622-4bbe-ad54-196514afaa20-datadir\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.498398 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f274356-4622-4bbe-ad54-196514afaa20-trusted-ca\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.498436 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/7f274356-4622-4bbe-ad54-196514afaa20-config-openshift-service-cacrt\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.498463 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/7f274356-4622-4bbe-ad54-196514afaa20-metrics\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.498500 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f274356-4622-4bbe-ad54-196514afaa20-config\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.498536 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7f274356-4622-4bbe-ad54-196514afaa20-tmp\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.498571 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"collector-token\" (UniqueName: \"kubernetes.io/secret/7f274356-4622-4bbe-ad54-196514afaa20-collector-token\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.600333 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7f274356-4622-4bbe-ad54-196514afaa20-tmp\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.600439 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/7f274356-4622-4bbe-ad54-196514afaa20-collector-token\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.600541 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/7f274356-4622-4bbe-ad54-196514afaa20-collector-syslog-receiver\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.600592 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6hvc\" (UniqueName: \"kubernetes.io/projected/7f274356-4622-4bbe-ad54-196514afaa20-kube-api-access-g6hvc\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.600706 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/7f274356-4622-4bbe-ad54-196514afaa20-entrypoint\") pod \"collector-m899v\" (UID: 
\"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.600792 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/7f274356-4622-4bbe-ad54-196514afaa20-sa-token\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.600840 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/7f274356-4622-4bbe-ad54-196514afaa20-datadir\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.600888 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f274356-4622-4bbe-ad54-196514afaa20-trusted-ca\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.600950 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/7f274356-4622-4bbe-ad54-196514afaa20-config-openshift-service-cacrt\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.601020 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/7f274356-4622-4bbe-ad54-196514afaa20-metrics\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.601088 4847 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f274356-4622-4bbe-ad54-196514afaa20-config\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.602068 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/7f274356-4622-4bbe-ad54-196514afaa20-datadir\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.602595 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/7f274356-4622-4bbe-ad54-196514afaa20-config-openshift-service-cacrt\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.603229 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f274356-4622-4bbe-ad54-196514afaa20-config\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.604042 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/7f274356-4622-4bbe-ad54-196514afaa20-entrypoint\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.604774 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f274356-4622-4bbe-ad54-196514afaa20-trusted-ca\") pod \"collector-m899v\" 
(UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.605227 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/7f274356-4622-4bbe-ad54-196514afaa20-collector-token\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.607086 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/7f274356-4622-4bbe-ad54-196514afaa20-metrics\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.607649 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7f274356-4622-4bbe-ad54-196514afaa20-tmp\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.614256 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/7f274356-4622-4bbe-ad54-196514afaa20-collector-syslog-receiver\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.638654 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6hvc\" (UniqueName: \"kubernetes.io/projected/7f274356-4622-4bbe-ad54-196514afaa20-kube-api-access-g6hvc\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.640121 4847 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/7f274356-4622-4bbe-ad54-196514afaa20-sa-token\") pod \"collector-m899v\" (UID: \"7f274356-4622-4bbe-ad54-196514afaa20\") " pod="openshift-logging/collector-m899v" Feb 18 00:40:33 crc kubenswrapper[4847]: I0218 00:40:33.736506 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-m899v" Feb 18 00:40:34 crc kubenswrapper[4847]: I0218 00:40:34.262226 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-m899v"] Feb 18 00:40:34 crc kubenswrapper[4847]: I0218 00:40:34.338109 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-m899v" event={"ID":"7f274356-4622-4bbe-ad54-196514afaa20","Type":"ContainerStarted","Data":"5a5af27d74ba968d7976f072fb12df9acc9162dc7ee3825d387a2adb4688b81b"} Feb 18 00:40:35 crc kubenswrapper[4847]: I0218 00:40:35.416474 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="766b34e0-a6f6-477f-a335-dee0e718c2f3" path="/var/lib/kubelet/pods/766b34e0-a6f6-477f-a335-dee0e718c2f3/volumes" Feb 18 00:40:42 crc kubenswrapper[4847]: I0218 00:40:42.400979 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-m899v" event={"ID":"7f274356-4622-4bbe-ad54-196514afaa20","Type":"ContainerStarted","Data":"110816756b9bcf80eef561495456d0dcfcec2cbf3b951d56718ea9cde8435aff"} Feb 18 00:40:42 crc kubenswrapper[4847]: I0218 00:40:42.437214 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-m899v" podStartSLOduration=1.964075318 podStartE2EDuration="9.437191701s" podCreationTimestamp="2026-02-18 00:40:33 +0000 UTC" firstStartedPulling="2026-02-18 00:40:34.274522463 +0000 UTC m=+907.651873445" lastFinishedPulling="2026-02-18 00:40:41.747638886 +0000 UTC m=+915.124989828" observedRunningTime="2026-02-18 00:40:42.431146603 +0000 UTC m=+915.808497585" 
watchObservedRunningTime="2026-02-18 00:40:42.437191701 +0000 UTC m=+915.814542693" Feb 18 00:40:53 crc kubenswrapper[4847]: I0218 00:40:53.492243 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:40:53 crc kubenswrapper[4847]: I0218 00:40:53.493230 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:41:04 crc kubenswrapper[4847]: I0218 00:41:04.389521 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd"] Feb 18 00:41:04 crc kubenswrapper[4847]: I0218 00:41:04.391218 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd" Feb 18 00:41:04 crc kubenswrapper[4847]: I0218 00:41:04.394007 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 00:41:04 crc kubenswrapper[4847]: I0218 00:41:04.410821 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd"] Feb 18 00:41:04 crc kubenswrapper[4847]: I0218 00:41:04.450295 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wcm4\" (UniqueName: \"kubernetes.io/projected/b0059114-96c2-4ba4-9d6f-310d7e0a9372-kube-api-access-8wcm4\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd\" (UID: \"b0059114-96c2-4ba4-9d6f-310d7e0a9372\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd" Feb 18 00:41:04 crc kubenswrapper[4847]: I0218 00:41:04.450429 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b0059114-96c2-4ba4-9d6f-310d7e0a9372-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd\" (UID: \"b0059114-96c2-4ba4-9d6f-310d7e0a9372\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd" Feb 18 00:41:04 crc kubenswrapper[4847]: I0218 00:41:04.450460 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b0059114-96c2-4ba4-9d6f-310d7e0a9372-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd\" (UID: \"b0059114-96c2-4ba4-9d6f-310d7e0a9372\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd" Feb 18 00:41:04 crc kubenswrapper[4847]: 
I0218 00:41:04.551852 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b0059114-96c2-4ba4-9d6f-310d7e0a9372-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd\" (UID: \"b0059114-96c2-4ba4-9d6f-310d7e0a9372\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd" Feb 18 00:41:04 crc kubenswrapper[4847]: I0218 00:41:04.551923 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b0059114-96c2-4ba4-9d6f-310d7e0a9372-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd\" (UID: \"b0059114-96c2-4ba4-9d6f-310d7e0a9372\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd" Feb 18 00:41:04 crc kubenswrapper[4847]: I0218 00:41:04.552036 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wcm4\" (UniqueName: \"kubernetes.io/projected/b0059114-96c2-4ba4-9d6f-310d7e0a9372-kube-api-access-8wcm4\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd\" (UID: \"b0059114-96c2-4ba4-9d6f-310d7e0a9372\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd" Feb 18 00:41:04 crc kubenswrapper[4847]: I0218 00:41:04.552712 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b0059114-96c2-4ba4-9d6f-310d7e0a9372-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd\" (UID: \"b0059114-96c2-4ba4-9d6f-310d7e0a9372\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd" Feb 18 00:41:04 crc kubenswrapper[4847]: I0218 00:41:04.552723 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/b0059114-96c2-4ba4-9d6f-310d7e0a9372-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd\" (UID: \"b0059114-96c2-4ba4-9d6f-310d7e0a9372\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd" Feb 18 00:41:04 crc kubenswrapper[4847]: I0218 00:41:04.578219 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wcm4\" (UniqueName: \"kubernetes.io/projected/b0059114-96c2-4ba4-9d6f-310d7e0a9372-kube-api-access-8wcm4\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd\" (UID: \"b0059114-96c2-4ba4-9d6f-310d7e0a9372\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd" Feb 18 00:41:04 crc kubenswrapper[4847]: I0218 00:41:04.713933 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd" Feb 18 00:41:05 crc kubenswrapper[4847]: I0218 00:41:05.185481 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd"] Feb 18 00:41:05 crc kubenswrapper[4847]: W0218 00:41:05.200716 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0059114_96c2_4ba4_9d6f_310d7e0a9372.slice/crio-63eae7efe670d88c3a392df3c468074746fb9fa23b692f78d0b1b7ab6073071d WatchSource:0}: Error finding container 63eae7efe670d88c3a392df3c468074746fb9fa23b692f78d0b1b7ab6073071d: Status 404 returned error can't find the container with id 63eae7efe670d88c3a392df3c468074746fb9fa23b692f78d0b1b7ab6073071d Feb 18 00:41:05 crc kubenswrapper[4847]: I0218 00:41:05.597185 4847 generic.go:334] "Generic (PLEG): container finished" podID="b0059114-96c2-4ba4-9d6f-310d7e0a9372" containerID="cb46ee1141371c6c665e23d6c05a52572231efe223313fd80024d362b18e0b89" 
exitCode=0 Feb 18 00:41:05 crc kubenswrapper[4847]: I0218 00:41:05.597240 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd" event={"ID":"b0059114-96c2-4ba4-9d6f-310d7e0a9372","Type":"ContainerDied","Data":"cb46ee1141371c6c665e23d6c05a52572231efe223313fd80024d362b18e0b89"} Feb 18 00:41:05 crc kubenswrapper[4847]: I0218 00:41:05.597291 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd" event={"ID":"b0059114-96c2-4ba4-9d6f-310d7e0a9372","Type":"ContainerStarted","Data":"63eae7efe670d88c3a392df3c468074746fb9fa23b692f78d0b1b7ab6073071d"} Feb 18 00:41:08 crc kubenswrapper[4847]: I0218 00:41:08.636313 4847 generic.go:334] "Generic (PLEG): container finished" podID="b0059114-96c2-4ba4-9d6f-310d7e0a9372" containerID="48ff04d5efe976d6568a7918a4611b940a8252b3a0832912f46f0276c4397dda" exitCode=0 Feb 18 00:41:08 crc kubenswrapper[4847]: I0218 00:41:08.636353 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd" event={"ID":"b0059114-96c2-4ba4-9d6f-310d7e0a9372","Type":"ContainerDied","Data":"48ff04d5efe976d6568a7918a4611b940a8252b3a0832912f46f0276c4397dda"} Feb 18 00:41:09 crc kubenswrapper[4847]: I0218 00:41:09.649923 4847 generic.go:334] "Generic (PLEG): container finished" podID="b0059114-96c2-4ba4-9d6f-310d7e0a9372" containerID="bafd1dac1f2e178c5ad68745b2cf6d60bcb3b4b3711524e3a53be06a74c14336" exitCode=0 Feb 18 00:41:09 crc kubenswrapper[4847]: I0218 00:41:09.650048 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd" event={"ID":"b0059114-96c2-4ba4-9d6f-310d7e0a9372","Type":"ContainerDied","Data":"bafd1dac1f2e178c5ad68745b2cf6d60bcb3b4b3711524e3a53be06a74c14336"} Feb 18 00:41:11 crc 
kubenswrapper[4847]: I0218 00:41:11.046899 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd" Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.159106 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b0059114-96c2-4ba4-9d6f-310d7e0a9372-util\") pod \"b0059114-96c2-4ba4-9d6f-310d7e0a9372\" (UID: \"b0059114-96c2-4ba4-9d6f-310d7e0a9372\") " Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.159267 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wcm4\" (UniqueName: \"kubernetes.io/projected/b0059114-96c2-4ba4-9d6f-310d7e0a9372-kube-api-access-8wcm4\") pod \"b0059114-96c2-4ba4-9d6f-310d7e0a9372\" (UID: \"b0059114-96c2-4ba4-9d6f-310d7e0a9372\") " Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.159389 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b0059114-96c2-4ba4-9d6f-310d7e0a9372-bundle\") pod \"b0059114-96c2-4ba4-9d6f-310d7e0a9372\" (UID: \"b0059114-96c2-4ba4-9d6f-310d7e0a9372\") " Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.160386 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0059114-96c2-4ba4-9d6f-310d7e0a9372-bundle" (OuterVolumeSpecName: "bundle") pod "b0059114-96c2-4ba4-9d6f-310d7e0a9372" (UID: "b0059114-96c2-4ba4-9d6f-310d7e0a9372"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.165200 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0059114-96c2-4ba4-9d6f-310d7e0a9372-kube-api-access-8wcm4" (OuterVolumeSpecName: "kube-api-access-8wcm4") pod "b0059114-96c2-4ba4-9d6f-310d7e0a9372" (UID: "b0059114-96c2-4ba4-9d6f-310d7e0a9372"). InnerVolumeSpecName "kube-api-access-8wcm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.182673 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0059114-96c2-4ba4-9d6f-310d7e0a9372-util" (OuterVolumeSpecName: "util") pod "b0059114-96c2-4ba4-9d6f-310d7e0a9372" (UID: "b0059114-96c2-4ba4-9d6f-310d7e0a9372"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.261043 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wcm4\" (UniqueName: \"kubernetes.io/projected/b0059114-96c2-4ba4-9d6f-310d7e0a9372-kube-api-access-8wcm4\") on node \"crc\" DevicePath \"\"" Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.261111 4847 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b0059114-96c2-4ba4-9d6f-310d7e0a9372-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.261135 4847 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b0059114-96c2-4ba4-9d6f-310d7e0a9372-util\") on node \"crc\" DevicePath \"\"" Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.668519 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd" 
event={"ID":"b0059114-96c2-4ba4-9d6f-310d7e0a9372","Type":"ContainerDied","Data":"63eae7efe670d88c3a392df3c468074746fb9fa23b692f78d0b1b7ab6073071d"} Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.668572 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63eae7efe670d88c3a392df3c468074746fb9fa23b692f78d0b1b7ab6073071d" Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.668732 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd" Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.755542 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-78qcj"] Feb 18 00:41:11 crc kubenswrapper[4847]: E0218 00:41:11.756006 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0059114-96c2-4ba4-9d6f-310d7e0a9372" containerName="extract" Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.756034 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0059114-96c2-4ba4-9d6f-310d7e0a9372" containerName="extract" Feb 18 00:41:11 crc kubenswrapper[4847]: E0218 00:41:11.756066 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0059114-96c2-4ba4-9d6f-310d7e0a9372" containerName="pull" Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.756087 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0059114-96c2-4ba4-9d6f-310d7e0a9372" containerName="pull" Feb 18 00:41:11 crc kubenswrapper[4847]: E0218 00:41:11.756121 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0059114-96c2-4ba4-9d6f-310d7e0a9372" containerName="util" Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.756135 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0059114-96c2-4ba4-9d6f-310d7e0a9372" containerName="util" Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.756486 4847 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="b0059114-96c2-4ba4-9d6f-310d7e0a9372" containerName="extract" Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.758296 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-78qcj" Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.773395 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-78qcj"] Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.870632 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmz4r\" (UniqueName: \"kubernetes.io/projected/6a35538f-d541-4b9b-9774-84863b939bd6-kube-api-access-zmz4r\") pod \"certified-operators-78qcj\" (UID: \"6a35538f-d541-4b9b-9774-84863b939bd6\") " pod="openshift-marketplace/certified-operators-78qcj" Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.870682 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a35538f-d541-4b9b-9774-84863b939bd6-utilities\") pod \"certified-operators-78qcj\" (UID: \"6a35538f-d541-4b9b-9774-84863b939bd6\") " pod="openshift-marketplace/certified-operators-78qcj" Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.870995 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a35538f-d541-4b9b-9774-84863b939bd6-catalog-content\") pod \"certified-operators-78qcj\" (UID: \"6a35538f-d541-4b9b-9774-84863b939bd6\") " pod="openshift-marketplace/certified-operators-78qcj" Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.972624 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a35538f-d541-4b9b-9774-84863b939bd6-catalog-content\") pod 
\"certified-operators-78qcj\" (UID: \"6a35538f-d541-4b9b-9774-84863b939bd6\") " pod="openshift-marketplace/certified-operators-78qcj" Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.972688 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmz4r\" (UniqueName: \"kubernetes.io/projected/6a35538f-d541-4b9b-9774-84863b939bd6-kube-api-access-zmz4r\") pod \"certified-operators-78qcj\" (UID: \"6a35538f-d541-4b9b-9774-84863b939bd6\") " pod="openshift-marketplace/certified-operators-78qcj" Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.972709 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a35538f-d541-4b9b-9774-84863b939bd6-utilities\") pod \"certified-operators-78qcj\" (UID: \"6a35538f-d541-4b9b-9774-84863b939bd6\") " pod="openshift-marketplace/certified-operators-78qcj" Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.973355 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a35538f-d541-4b9b-9774-84863b939bd6-utilities\") pod \"certified-operators-78qcj\" (UID: \"6a35538f-d541-4b9b-9774-84863b939bd6\") " pod="openshift-marketplace/certified-operators-78qcj" Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.973509 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a35538f-d541-4b9b-9774-84863b939bd6-catalog-content\") pod \"certified-operators-78qcj\" (UID: \"6a35538f-d541-4b9b-9774-84863b939bd6\") " pod="openshift-marketplace/certified-operators-78qcj" Feb 18 00:41:11 crc kubenswrapper[4847]: I0218 00:41:11.997155 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmz4r\" (UniqueName: \"kubernetes.io/projected/6a35538f-d541-4b9b-9774-84863b939bd6-kube-api-access-zmz4r\") pod \"certified-operators-78qcj\" (UID: 
\"6a35538f-d541-4b9b-9774-84863b939bd6\") " pod="openshift-marketplace/certified-operators-78qcj" Feb 18 00:41:12 crc kubenswrapper[4847]: I0218 00:41:12.078190 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-78qcj" Feb 18 00:41:12 crc kubenswrapper[4847]: I0218 00:41:12.521354 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-78qcj"] Feb 18 00:41:12 crc kubenswrapper[4847]: I0218 00:41:12.675185 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78qcj" event={"ID":"6a35538f-d541-4b9b-9774-84863b939bd6","Type":"ContainerStarted","Data":"18b9392f91230691c3a544a3a0ec29ceb915e66deb7db14dc5144c30632a3b79"} Feb 18 00:41:13 crc kubenswrapper[4847]: I0218 00:41:13.685619 4847 generic.go:334] "Generic (PLEG): container finished" podID="6a35538f-d541-4b9b-9774-84863b939bd6" containerID="b06d6c77f23779e3439e65ed84e436cf3f6ba10e1e806b4ff044199310a9e8aa" exitCode=0 Feb 18 00:41:13 crc kubenswrapper[4847]: I0218 00:41:13.685680 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78qcj" event={"ID":"6a35538f-d541-4b9b-9774-84863b939bd6","Type":"ContainerDied","Data":"b06d6c77f23779e3439e65ed84e436cf3f6ba10e1e806b4ff044199310a9e8aa"} Feb 18 00:41:15 crc kubenswrapper[4847]: I0218 00:41:15.690818 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-p7scf"] Feb 18 00:41:15 crc kubenswrapper[4847]: I0218 00:41:15.692427 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-p7scf" Feb 18 00:41:15 crc kubenswrapper[4847]: I0218 00:41:15.694867 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 18 00:41:15 crc kubenswrapper[4847]: I0218 00:41:15.696549 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 18 00:41:15 crc kubenswrapper[4847]: I0218 00:41:15.696777 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-rccpg" Feb 18 00:41:15 crc kubenswrapper[4847]: I0218 00:41:15.702983 4847 generic.go:334] "Generic (PLEG): container finished" podID="6a35538f-d541-4b9b-9774-84863b939bd6" containerID="07f05e188b12077824cf7acee07bee311a4bb27344c029c421fe7cca60cdaa54" exitCode=0 Feb 18 00:41:15 crc kubenswrapper[4847]: I0218 00:41:15.703039 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78qcj" event={"ID":"6a35538f-d541-4b9b-9774-84863b939bd6","Type":"ContainerDied","Data":"07f05e188b12077824cf7acee07bee311a4bb27344c029c421fe7cca60cdaa54"} Feb 18 00:41:15 crc kubenswrapper[4847]: I0218 00:41:15.721091 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-p7scf"] Feb 18 00:41:15 crc kubenswrapper[4847]: I0218 00:41:15.840498 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fccn5\" (UniqueName: \"kubernetes.io/projected/985d9311-59df-4b29-9d4c-0103f801ed1c-kube-api-access-fccn5\") pod \"nmstate-operator-694c9596b7-p7scf\" (UID: \"985d9311-59df-4b29-9d4c-0103f801ed1c\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-p7scf" Feb 18 00:41:15 crc kubenswrapper[4847]: I0218 00:41:15.942481 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fccn5\" (UniqueName: 
\"kubernetes.io/projected/985d9311-59df-4b29-9d4c-0103f801ed1c-kube-api-access-fccn5\") pod \"nmstate-operator-694c9596b7-p7scf\" (UID: \"985d9311-59df-4b29-9d4c-0103f801ed1c\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-p7scf" Feb 18 00:41:15 crc kubenswrapper[4847]: I0218 00:41:15.966151 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fccn5\" (UniqueName: \"kubernetes.io/projected/985d9311-59df-4b29-9d4c-0103f801ed1c-kube-api-access-fccn5\") pod \"nmstate-operator-694c9596b7-p7scf\" (UID: \"985d9311-59df-4b29-9d4c-0103f801ed1c\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-p7scf" Feb 18 00:41:16 crc kubenswrapper[4847]: I0218 00:41:16.018975 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-p7scf" Feb 18 00:41:16 crc kubenswrapper[4847]: I0218 00:41:16.283120 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-p7scf"] Feb 18 00:41:16 crc kubenswrapper[4847]: I0218 00:41:16.710825 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78qcj" event={"ID":"6a35538f-d541-4b9b-9774-84863b939bd6","Type":"ContainerStarted","Data":"d2c285fe0e992ecdeb01abcb2cb6434849fb1af046e1b2ed3465d3f2bbc885a1"} Feb 18 00:41:16 crc kubenswrapper[4847]: I0218 00:41:16.736710 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-p7scf" event={"ID":"985d9311-59df-4b29-9d4c-0103f801ed1c","Type":"ContainerStarted","Data":"4fe00866292980d09bfcbcbe0067cec8572fd4271a5205ca8f1d9a8158b719b6"} Feb 18 00:41:16 crc kubenswrapper[4847]: I0218 00:41:16.741660 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-78qcj" podStartSLOduration=3.3020718479999998 podStartE2EDuration="5.741640798s" podCreationTimestamp="2026-02-18 00:41:11 +0000 UTC" 
firstStartedPulling="2026-02-18 00:41:13.6877837 +0000 UTC m=+947.065134642" lastFinishedPulling="2026-02-18 00:41:16.12735265 +0000 UTC m=+949.504703592" observedRunningTime="2026-02-18 00:41:16.735411826 +0000 UTC m=+950.112762768" watchObservedRunningTime="2026-02-18 00:41:16.741640798 +0000 UTC m=+950.118991740" Feb 18 00:41:19 crc kubenswrapper[4847]: I0218 00:41:19.761833 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-p7scf" event={"ID":"985d9311-59df-4b29-9d4c-0103f801ed1c","Type":"ContainerStarted","Data":"d44a304a14d4c3264bc4ec8be60fd85482890d6754a7ba8053bccff7fcf05722"} Feb 18 00:41:19 crc kubenswrapper[4847]: I0218 00:41:19.795838 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-p7scf" podStartSLOduration=2.377662766 podStartE2EDuration="4.795811903s" podCreationTimestamp="2026-02-18 00:41:15 +0000 UTC" firstStartedPulling="2026-02-18 00:41:16.293056166 +0000 UTC m=+949.670407108" lastFinishedPulling="2026-02-18 00:41:18.711205303 +0000 UTC m=+952.088556245" observedRunningTime="2026-02-18 00:41:19.78992962 +0000 UTC m=+953.167280572" watchObservedRunningTime="2026-02-18 00:41:19.795811903 +0000 UTC m=+953.173162855" Feb 18 00:41:22 crc kubenswrapper[4847]: I0218 00:41:22.079172 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-78qcj" Feb 18 00:41:22 crc kubenswrapper[4847]: I0218 00:41:22.079793 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-78qcj" Feb 18 00:41:22 crc kubenswrapper[4847]: I0218 00:41:22.156266 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-78qcj" Feb 18 00:41:22 crc kubenswrapper[4847]: I0218 00:41:22.833291 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-78qcj" Feb 18 00:41:23 crc kubenswrapper[4847]: I0218 00:41:23.134727 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-78qcj"] Feb 18 00:41:23 crc kubenswrapper[4847]: I0218 00:41:23.492044 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:41:23 crc kubenswrapper[4847]: I0218 00:41:23.492124 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:41:23 crc kubenswrapper[4847]: I0218 00:41:23.492170 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 00:41:23 crc kubenswrapper[4847]: I0218 00:41:23.492926 4847 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2ffcd87b881b6139f9535c89dd0258cbf56290dc9a8d88b06780fd38c9f1e0fa"} pod="openshift-machine-config-operator/machine-config-daemon-xsj47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:41:23 crc kubenswrapper[4847]: I0218 00:41:23.492988 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" containerID="cri-o://2ffcd87b881b6139f9535c89dd0258cbf56290dc9a8d88b06780fd38c9f1e0fa" 
gracePeriod=600 Feb 18 00:41:23 crc kubenswrapper[4847]: I0218 00:41:23.794914 4847 generic.go:334] "Generic (PLEG): container finished" podID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerID="2ffcd87b881b6139f9535c89dd0258cbf56290dc9a8d88b06780fd38c9f1e0fa" exitCode=0 Feb 18 00:41:23 crc kubenswrapper[4847]: I0218 00:41:23.795006 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerDied","Data":"2ffcd87b881b6139f9535c89dd0258cbf56290dc9a8d88b06780fd38c9f1e0fa"} Feb 18 00:41:23 crc kubenswrapper[4847]: I0218 00:41:23.795942 4847 scope.go:117] "RemoveContainer" containerID="7e14399c572be0bcab6145068e4196c5aff977a8de62be4c5222c60a21f3d43d" Feb 18 00:41:24 crc kubenswrapper[4847]: I0218 00:41:24.809116 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerStarted","Data":"0fd06824414c18aeb73533601d48a5d63e6df2929401b5f19f7490f5ebb56186"} Feb 18 00:41:24 crc kubenswrapper[4847]: I0218 00:41:24.809666 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-78qcj" podUID="6a35538f-d541-4b9b-9774-84863b939bd6" containerName="registry-server" containerID="cri-o://d2c285fe0e992ecdeb01abcb2cb6434849fb1af046e1b2ed3465d3f2bbc885a1" gracePeriod=2 Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.432668 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-78qcj" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.518848 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-c6rqp"] Feb 18 00:41:25 crc kubenswrapper[4847]: E0218 00:41:25.523319 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a35538f-d541-4b9b-9774-84863b939bd6" containerName="extract-utilities" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.523353 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a35538f-d541-4b9b-9774-84863b939bd6" containerName="extract-utilities" Feb 18 00:41:25 crc kubenswrapper[4847]: E0218 00:41:25.523371 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a35538f-d541-4b9b-9774-84863b939bd6" containerName="registry-server" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.523378 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a35538f-d541-4b9b-9774-84863b939bd6" containerName="registry-server" Feb 18 00:41:25 crc kubenswrapper[4847]: E0218 00:41:25.523390 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a35538f-d541-4b9b-9774-84863b939bd6" containerName="extract-content" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.523396 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a35538f-d541-4b9b-9774-84863b939bd6" containerName="extract-content" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.523619 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a35538f-d541-4b9b-9774-84863b939bd6" containerName="registry-server" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.524329 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-c6rqp" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.528645 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-xtg4m" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.536498 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-c6rqp"] Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.542016 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-k57ql"] Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.543103 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-k57ql" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.544808 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.550826 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-k57ql"] Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.561529 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-sffsn"] Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.562450 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-sffsn" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.601663 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a35538f-d541-4b9b-9774-84863b939bd6-utilities\") pod \"6a35538f-d541-4b9b-9774-84863b939bd6\" (UID: \"6a35538f-d541-4b9b-9774-84863b939bd6\") " Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.602058 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a35538f-d541-4b9b-9774-84863b939bd6-catalog-content\") pod \"6a35538f-d541-4b9b-9774-84863b939bd6\" (UID: \"6a35538f-d541-4b9b-9774-84863b939bd6\") " Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.602187 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmz4r\" (UniqueName: \"kubernetes.io/projected/6a35538f-d541-4b9b-9774-84863b939bd6-kube-api-access-zmz4r\") pod \"6a35538f-d541-4b9b-9774-84863b939bd6\" (UID: \"6a35538f-d541-4b9b-9774-84863b939bd6\") " Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.602346 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/818015c7-8c32-4aff-9723-67548354380b-nmstate-lock\") pod \"nmstate-handler-sffsn\" (UID: \"818015c7-8c32-4aff-9723-67548354380b\") " pod="openshift-nmstate/nmstate-handler-sffsn" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.602477 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nktcd\" (UniqueName: \"kubernetes.io/projected/236a285b-dac4-49c1-9cf1-f76b5b0f6a79-kube-api-access-nktcd\") pod \"nmstate-metrics-58c85c668d-c6rqp\" (UID: \"236a285b-dac4-49c1-9cf1-f76b5b0f6a79\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-c6rqp" Feb 18 00:41:25 crc 
kubenswrapper[4847]: I0218 00:41:25.602588 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/818015c7-8c32-4aff-9723-67548354380b-dbus-socket\") pod \"nmstate-handler-sffsn\" (UID: \"818015c7-8c32-4aff-9723-67548354380b\") " pod="openshift-nmstate/nmstate-handler-sffsn" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.602727 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/818015c7-8c32-4aff-9723-67548354380b-ovs-socket\") pod \"nmstate-handler-sffsn\" (UID: \"818015c7-8c32-4aff-9723-67548354380b\") " pod="openshift-nmstate/nmstate-handler-sffsn" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.603089 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/637e4133-8cdb-4098-bd6a-55cb7ce569b4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-k57ql\" (UID: \"637e4133-8cdb-4098-bd6a-55cb7ce569b4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-k57ql" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.603411 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qgrz\" (UniqueName: \"kubernetes.io/projected/818015c7-8c32-4aff-9723-67548354380b-kube-api-access-2qgrz\") pod \"nmstate-handler-sffsn\" (UID: \"818015c7-8c32-4aff-9723-67548354380b\") " pod="openshift-nmstate/nmstate-handler-sffsn" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.603440 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt2cd\" (UniqueName: \"kubernetes.io/projected/637e4133-8cdb-4098-bd6a-55cb7ce569b4-kube-api-access-wt2cd\") pod \"nmstate-webhook-866bcb46dc-k57ql\" (UID: \"637e4133-8cdb-4098-bd6a-55cb7ce569b4\") " 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-k57ql" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.604483 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a35538f-d541-4b9b-9774-84863b939bd6-utilities" (OuterVolumeSpecName: "utilities") pod "6a35538f-d541-4b9b-9774-84863b939bd6" (UID: "6a35538f-d541-4b9b-9774-84863b939bd6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.609642 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a35538f-d541-4b9b-9774-84863b939bd6-kube-api-access-zmz4r" (OuterVolumeSpecName: "kube-api-access-zmz4r") pod "6a35538f-d541-4b9b-9774-84863b939bd6" (UID: "6a35538f-d541-4b9b-9774-84863b939bd6"). InnerVolumeSpecName "kube-api-access-zmz4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.625469 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vjmwl"] Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.628121 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vjmwl" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.632172 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.632190 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.632493 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-ft9sp" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.641489 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vjmwl"] Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.679928 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a35538f-d541-4b9b-9774-84863b939bd6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a35538f-d541-4b9b-9774-84863b939bd6" (UID: "6a35538f-d541-4b9b-9774-84863b939bd6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.705491 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nktcd\" (UniqueName: \"kubernetes.io/projected/236a285b-dac4-49c1-9cf1-f76b5b0f6a79-kube-api-access-nktcd\") pod \"nmstate-metrics-58c85c668d-c6rqp\" (UID: \"236a285b-dac4-49c1-9cf1-f76b5b0f6a79\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-c6rqp" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.705795 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/818015c7-8c32-4aff-9723-67548354380b-dbus-socket\") pod \"nmstate-handler-sffsn\" (UID: \"818015c7-8c32-4aff-9723-67548354380b\") " pod="openshift-nmstate/nmstate-handler-sffsn" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.705897 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/aeb7db89-c5aa-4675-aa0c-f4f6a34b109b-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-vjmwl\" (UID: \"aeb7db89-c5aa-4675-aa0c-f4f6a34b109b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vjmwl" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.705984 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln7dw\" (UniqueName: \"kubernetes.io/projected/aeb7db89-c5aa-4675-aa0c-f4f6a34b109b-kube-api-access-ln7dw\") pod \"nmstate-console-plugin-5c78fc5d65-vjmwl\" (UID: \"aeb7db89-c5aa-4675-aa0c-f4f6a34b109b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vjmwl" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.706087 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/818015c7-8c32-4aff-9723-67548354380b-ovs-socket\") pod 
\"nmstate-handler-sffsn\" (UID: \"818015c7-8c32-4aff-9723-67548354380b\") " pod="openshift-nmstate/nmstate-handler-sffsn" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.706200 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/aeb7db89-c5aa-4675-aa0c-f4f6a34b109b-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-vjmwl\" (UID: \"aeb7db89-c5aa-4675-aa0c-f4f6a34b109b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vjmwl" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.706298 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/637e4133-8cdb-4098-bd6a-55cb7ce569b4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-k57ql\" (UID: \"637e4133-8cdb-4098-bd6a-55cb7ce569b4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-k57ql" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.706388 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qgrz\" (UniqueName: \"kubernetes.io/projected/818015c7-8c32-4aff-9723-67548354380b-kube-api-access-2qgrz\") pod \"nmstate-handler-sffsn\" (UID: \"818015c7-8c32-4aff-9723-67548354380b\") " pod="openshift-nmstate/nmstate-handler-sffsn" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.706469 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt2cd\" (UniqueName: \"kubernetes.io/projected/637e4133-8cdb-4098-bd6a-55cb7ce569b4-kube-api-access-wt2cd\") pod \"nmstate-webhook-866bcb46dc-k57ql\" (UID: \"637e4133-8cdb-4098-bd6a-55cb7ce569b4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-k57ql" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.706560 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/818015c7-8c32-4aff-9723-67548354380b-nmstate-lock\") pod \"nmstate-handler-sffsn\" (UID: \"818015c7-8c32-4aff-9723-67548354380b\") " pod="openshift-nmstate/nmstate-handler-sffsn" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.706694 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a35538f-d541-4b9b-9774-84863b939bd6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.706757 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmz4r\" (UniqueName: \"kubernetes.io/projected/6a35538f-d541-4b9b-9774-84863b939bd6-kube-api-access-zmz4r\") on node \"crc\" DevicePath \"\"" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.706813 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a35538f-d541-4b9b-9774-84863b939bd6-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.706921 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/818015c7-8c32-4aff-9723-67548354380b-nmstate-lock\") pod \"nmstate-handler-sffsn\" (UID: \"818015c7-8c32-4aff-9723-67548354380b\") " pod="openshift-nmstate/nmstate-handler-sffsn" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.706950 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/818015c7-8c32-4aff-9723-67548354380b-dbus-socket\") pod \"nmstate-handler-sffsn\" (UID: \"818015c7-8c32-4aff-9723-67548354380b\") " pod="openshift-nmstate/nmstate-handler-sffsn" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.706999 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/818015c7-8c32-4aff-9723-67548354380b-ovs-socket\") pod 
\"nmstate-handler-sffsn\" (UID: \"818015c7-8c32-4aff-9723-67548354380b\") " pod="openshift-nmstate/nmstate-handler-sffsn" Feb 18 00:41:25 crc kubenswrapper[4847]: E0218 00:41:25.707192 4847 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 18 00:41:25 crc kubenswrapper[4847]: E0218 00:41:25.707255 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/637e4133-8cdb-4098-bd6a-55cb7ce569b4-tls-key-pair podName:637e4133-8cdb-4098-bd6a-55cb7ce569b4 nodeName:}" failed. No retries permitted until 2026-02-18 00:41:26.207236366 +0000 UTC m=+959.584587498 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/637e4133-8cdb-4098-bd6a-55cb7ce569b4-tls-key-pair") pod "nmstate-webhook-866bcb46dc-k57ql" (UID: "637e4133-8cdb-4098-bd6a-55cb7ce569b4") : secret "openshift-nmstate-webhook" not found Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.724428 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nktcd\" (UniqueName: \"kubernetes.io/projected/236a285b-dac4-49c1-9cf1-f76b5b0f6a79-kube-api-access-nktcd\") pod \"nmstate-metrics-58c85c668d-c6rqp\" (UID: \"236a285b-dac4-49c1-9cf1-f76b5b0f6a79\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-c6rqp" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.725973 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qgrz\" (UniqueName: \"kubernetes.io/projected/818015c7-8c32-4aff-9723-67548354380b-kube-api-access-2qgrz\") pod \"nmstate-handler-sffsn\" (UID: \"818015c7-8c32-4aff-9723-67548354380b\") " pod="openshift-nmstate/nmstate-handler-sffsn" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.743694 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt2cd\" (UniqueName: 
\"kubernetes.io/projected/637e4133-8cdb-4098-bd6a-55cb7ce569b4-kube-api-access-wt2cd\") pod \"nmstate-webhook-866bcb46dc-k57ql\" (UID: \"637e4133-8cdb-4098-bd6a-55cb7ce569b4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-k57ql" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.808217 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/aeb7db89-c5aa-4675-aa0c-f4f6a34b109b-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-vjmwl\" (UID: \"aeb7db89-c5aa-4675-aa0c-f4f6a34b109b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vjmwl" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.808280 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln7dw\" (UniqueName: \"kubernetes.io/projected/aeb7db89-c5aa-4675-aa0c-f4f6a34b109b-kube-api-access-ln7dw\") pod \"nmstate-console-plugin-5c78fc5d65-vjmwl\" (UID: \"aeb7db89-c5aa-4675-aa0c-f4f6a34b109b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vjmwl" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.808316 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/aeb7db89-c5aa-4675-aa0c-f4f6a34b109b-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-vjmwl\" (UID: \"aeb7db89-c5aa-4675-aa0c-f4f6a34b109b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vjmwl" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.809556 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/aeb7db89-c5aa-4675-aa0c-f4f6a34b109b-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-vjmwl\" (UID: \"aeb7db89-c5aa-4675-aa0c-f4f6a34b109b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vjmwl" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.813763 4847 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/aeb7db89-c5aa-4675-aa0c-f4f6a34b109b-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-vjmwl\" (UID: \"aeb7db89-c5aa-4675-aa0c-f4f6a34b109b\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vjmwl" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.818978 4847 generic.go:334] "Generic (PLEG): container finished" podID="6a35538f-d541-4b9b-9774-84863b939bd6" containerID="d2c285fe0e992ecdeb01abcb2cb6434849fb1af046e1b2ed3465d3f2bbc885a1" exitCode=0 Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.819930 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-78qcj" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.827843 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78qcj" event={"ID":"6a35538f-d541-4b9b-9774-84863b939bd6","Type":"ContainerDied","Data":"d2c285fe0e992ecdeb01abcb2cb6434849fb1af046e1b2ed3465d3f2bbc885a1"} Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.827907 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78qcj" event={"ID":"6a35538f-d541-4b9b-9774-84863b939bd6","Type":"ContainerDied","Data":"18b9392f91230691c3a544a3a0ec29ceb915e66deb7db14dc5144c30632a3b79"} Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.827927 4847 scope.go:117] "RemoveContainer" containerID="d2c285fe0e992ecdeb01abcb2cb6434849fb1af046e1b2ed3465d3f2bbc885a1" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.837205 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln7dw\" (UniqueName: \"kubernetes.io/projected/aeb7db89-c5aa-4675-aa0c-f4f6a34b109b-kube-api-access-ln7dw\") pod \"nmstate-console-plugin-5c78fc5d65-vjmwl\" (UID: \"aeb7db89-c5aa-4675-aa0c-f4f6a34b109b\") " 
pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vjmwl" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.848873 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-c6rqp" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.849744 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-68cc555589-wskw7"] Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.850520 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.851080 4847 scope.go:117] "RemoveContainer" containerID="07f05e188b12077824cf7acee07bee311a4bb27344c029c421fe7cca60cdaa54" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.857288 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-78qcj"] Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.863514 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-78qcj"] Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.867931 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-68cc555589-wskw7"] Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.884518 4847 scope.go:117] "RemoveContainer" containerID="b06d6c77f23779e3439e65ed84e436cf3f6ba10e1e806b4ff044199310a9e8aa" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.884968 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-sffsn" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.909321 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-service-ca\") pod \"console-68cc555589-wskw7\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.909371 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-oauth-serving-cert\") pod \"console-68cc555589-wskw7\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.909409 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-trusted-ca-bundle\") pod \"console-68cc555589-wskw7\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.909451 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-console-config\") pod \"console-68cc555589-wskw7\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.909521 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/79178e72-a62d-47ea-ba8c-7dfdf3171258-console-oauth-config\") pod 
\"console-68cc555589-wskw7\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.909562 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8blx6\" (UniqueName: \"kubernetes.io/projected/79178e72-a62d-47ea-ba8c-7dfdf3171258-kube-api-access-8blx6\") pod \"console-68cc555589-wskw7\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.909581 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79178e72-a62d-47ea-ba8c-7dfdf3171258-console-serving-cert\") pod \"console-68cc555589-wskw7\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.915687 4847 scope.go:117] "RemoveContainer" containerID="d2c285fe0e992ecdeb01abcb2cb6434849fb1af046e1b2ed3465d3f2bbc885a1" Feb 18 00:41:25 crc kubenswrapper[4847]: E0218 00:41:25.916036 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2c285fe0e992ecdeb01abcb2cb6434849fb1af046e1b2ed3465d3f2bbc885a1\": container with ID starting with d2c285fe0e992ecdeb01abcb2cb6434849fb1af046e1b2ed3465d3f2bbc885a1 not found: ID does not exist" containerID="d2c285fe0e992ecdeb01abcb2cb6434849fb1af046e1b2ed3465d3f2bbc885a1" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.916065 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2c285fe0e992ecdeb01abcb2cb6434849fb1af046e1b2ed3465d3f2bbc885a1"} err="failed to get container status \"d2c285fe0e992ecdeb01abcb2cb6434849fb1af046e1b2ed3465d3f2bbc885a1\": rpc error: code = NotFound desc = could not find 
container \"d2c285fe0e992ecdeb01abcb2cb6434849fb1af046e1b2ed3465d3f2bbc885a1\": container with ID starting with d2c285fe0e992ecdeb01abcb2cb6434849fb1af046e1b2ed3465d3f2bbc885a1 not found: ID does not exist" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.916085 4847 scope.go:117] "RemoveContainer" containerID="07f05e188b12077824cf7acee07bee311a4bb27344c029c421fe7cca60cdaa54" Feb 18 00:41:25 crc kubenswrapper[4847]: E0218 00:41:25.916323 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07f05e188b12077824cf7acee07bee311a4bb27344c029c421fe7cca60cdaa54\": container with ID starting with 07f05e188b12077824cf7acee07bee311a4bb27344c029c421fe7cca60cdaa54 not found: ID does not exist" containerID="07f05e188b12077824cf7acee07bee311a4bb27344c029c421fe7cca60cdaa54" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.916342 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07f05e188b12077824cf7acee07bee311a4bb27344c029c421fe7cca60cdaa54"} err="failed to get container status \"07f05e188b12077824cf7acee07bee311a4bb27344c029c421fe7cca60cdaa54\": rpc error: code = NotFound desc = could not find container \"07f05e188b12077824cf7acee07bee311a4bb27344c029c421fe7cca60cdaa54\": container with ID starting with 07f05e188b12077824cf7acee07bee311a4bb27344c029c421fe7cca60cdaa54 not found: ID does not exist" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.916353 4847 scope.go:117] "RemoveContainer" containerID="b06d6c77f23779e3439e65ed84e436cf3f6ba10e1e806b4ff044199310a9e8aa" Feb 18 00:41:25 crc kubenswrapper[4847]: E0218 00:41:25.916583 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b06d6c77f23779e3439e65ed84e436cf3f6ba10e1e806b4ff044199310a9e8aa\": container with ID starting with b06d6c77f23779e3439e65ed84e436cf3f6ba10e1e806b4ff044199310a9e8aa not found: ID does 
not exist" containerID="b06d6c77f23779e3439e65ed84e436cf3f6ba10e1e806b4ff044199310a9e8aa" Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.916621 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b06d6c77f23779e3439e65ed84e436cf3f6ba10e1e806b4ff044199310a9e8aa"} err="failed to get container status \"b06d6c77f23779e3439e65ed84e436cf3f6ba10e1e806b4ff044199310a9e8aa\": rpc error: code = NotFound desc = could not find container \"b06d6c77f23779e3439e65ed84e436cf3f6ba10e1e806b4ff044199310a9e8aa\": container with ID starting with b06d6c77f23779e3439e65ed84e436cf3f6ba10e1e806b4ff044199310a9e8aa not found: ID does not exist" Feb 18 00:41:25 crc kubenswrapper[4847]: W0218 00:41:25.925196 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod818015c7_8c32_4aff_9723_67548354380b.slice/crio-dd069f136f1c22907b0f62529af737c211ccc59215993c13a9cb67dce591a81f WatchSource:0}: Error finding container dd069f136f1c22907b0f62529af737c211ccc59215993c13a9cb67dce591a81f: Status 404 returned error can't find the container with id dd069f136f1c22907b0f62529af737c211ccc59215993c13a9cb67dce591a81f Feb 18 00:41:25 crc kubenswrapper[4847]: I0218 00:41:25.945721 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vjmwl" Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.011056 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-service-ca\") pod \"console-68cc555589-wskw7\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.011118 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-oauth-serving-cert\") pod \"console-68cc555589-wskw7\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.011153 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-trusted-ca-bundle\") pod \"console-68cc555589-wskw7\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.011203 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-console-config\") pod \"console-68cc555589-wskw7\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.011276 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/79178e72-a62d-47ea-ba8c-7dfdf3171258-console-oauth-config\") pod \"console-68cc555589-wskw7\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " 
pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.011313 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8blx6\" (UniqueName: \"kubernetes.io/projected/79178e72-a62d-47ea-ba8c-7dfdf3171258-kube-api-access-8blx6\") pod \"console-68cc555589-wskw7\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.011335 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79178e72-a62d-47ea-ba8c-7dfdf3171258-console-serving-cert\") pod \"console-68cc555589-wskw7\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.013674 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-trusted-ca-bundle\") pod \"console-68cc555589-wskw7\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.014014 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-console-config\") pod \"console-68cc555589-wskw7\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.014883 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-oauth-serving-cert\") pod \"console-68cc555589-wskw7\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " 
pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.015104 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-service-ca\") pod \"console-68cc555589-wskw7\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.032561 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/79178e72-a62d-47ea-ba8c-7dfdf3171258-console-oauth-config\") pod \"console-68cc555589-wskw7\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.032706 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79178e72-a62d-47ea-ba8c-7dfdf3171258-console-serving-cert\") pod \"console-68cc555589-wskw7\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.035236 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8blx6\" (UniqueName: \"kubernetes.io/projected/79178e72-a62d-47ea-ba8c-7dfdf3171258-kube-api-access-8blx6\") pod \"console-68cc555589-wskw7\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.172768 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.214253 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/637e4133-8cdb-4098-bd6a-55cb7ce569b4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-k57ql\" (UID: \"637e4133-8cdb-4098-bd6a-55cb7ce569b4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-k57ql" Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.218553 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/637e4133-8cdb-4098-bd6a-55cb7ce569b4-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-k57ql\" (UID: \"637e4133-8cdb-4098-bd6a-55cb7ce569b4\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-k57ql" Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.292616 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-c6rqp"] Feb 18 00:41:26 crc kubenswrapper[4847]: W0218 00:41:26.307523 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod236a285b_dac4_49c1_9cf1_f76b5b0f6a79.slice/crio-dc24931fae5d6f8ba80f52fbe36de2339dd1975e29b97adb4f5ffd866ace6970 WatchSource:0}: Error finding container dc24931fae5d6f8ba80f52fbe36de2339dd1975e29b97adb4f5ffd866ace6970: Status 404 returned error can't find the container with id dc24931fae5d6f8ba80f52fbe36de2339dd1975e29b97adb4f5ffd866ace6970 Feb 18 00:41:26 crc kubenswrapper[4847]: W0218 00:41:26.382486 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaeb7db89_c5aa_4675_aa0c_f4f6a34b109b.slice/crio-edcf1252dddba97daea34c726efa88846c30a07092a7df7e668dd4a5c8e0f01c WatchSource:0}: Error finding container edcf1252dddba97daea34c726efa88846c30a07092a7df7e668dd4a5c8e0f01c: Status 
404 returned error can't find the container with id edcf1252dddba97daea34c726efa88846c30a07092a7df7e668dd4a5c8e0f01c Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.383402 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vjmwl"] Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.467145 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-k57ql" Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.584909 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-68cc555589-wskw7"] Feb 18 00:41:26 crc kubenswrapper[4847]: W0218 00:41:26.595198 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79178e72_a62d_47ea_ba8c_7dfdf3171258.slice/crio-238e7763895802127a1b2692078b46ec6712c243cd612661f7556a5310fe0f5e WatchSource:0}: Error finding container 238e7763895802127a1b2692078b46ec6712c243cd612661f7556a5310fe0f5e: Status 404 returned error can't find the container with id 238e7763895802127a1b2692078b46ec6712c243cd612661f7556a5310fe0f5e Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.827399 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68cc555589-wskw7" event={"ID":"79178e72-a62d-47ea-ba8c-7dfdf3171258","Type":"ContainerStarted","Data":"35855e15ee11fe5131c72f468378961facd97300c5e7c29f11b7ef4fa581684a"} Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.827663 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68cc555589-wskw7" event={"ID":"79178e72-a62d-47ea-ba8c-7dfdf3171258","Type":"ContainerStarted","Data":"238e7763895802127a1b2692078b46ec6712c243cd612661f7556a5310fe0f5e"} Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.828394 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-58c85c668d-c6rqp" event={"ID":"236a285b-dac4-49c1-9cf1-f76b5b0f6a79","Type":"ContainerStarted","Data":"dc24931fae5d6f8ba80f52fbe36de2339dd1975e29b97adb4f5ffd866ace6970"} Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.829401 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vjmwl" event={"ID":"aeb7db89-c5aa-4675-aa0c-f4f6a34b109b","Type":"ContainerStarted","Data":"edcf1252dddba97daea34c726efa88846c30a07092a7df7e668dd4a5c8e0f01c"} Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.831519 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-sffsn" event={"ID":"818015c7-8c32-4aff-9723-67548354380b","Type":"ContainerStarted","Data":"dd069f136f1c22907b0f62529af737c211ccc59215993c13a9cb67dce591a81f"} Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.862651 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-68cc555589-wskw7" podStartSLOduration=1.8626341549999998 podStartE2EDuration="1.862634155s" podCreationTimestamp="2026-02-18 00:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:41:26.859282853 +0000 UTC m=+960.236633795" watchObservedRunningTime="2026-02-18 00:41:26.862634155 +0000 UTC m=+960.239985097" Feb 18 00:41:26 crc kubenswrapper[4847]: I0218 00:41:26.953065 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-k57ql"] Feb 18 00:41:26 crc kubenswrapper[4847]: W0218 00:41:26.954877 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod637e4133_8cdb_4098_bd6a_55cb7ce569b4.slice/crio-400269a88b3f7df1f6be80f134dcca244b933776e32cde10869d35398c192db1 WatchSource:0}: Error finding container 
400269a88b3f7df1f6be80f134dcca244b933776e32cde10869d35398c192db1: Status 404 returned error can't find the container with id 400269a88b3f7df1f6be80f134dcca244b933776e32cde10869d35398c192db1 Feb 18 00:41:27 crc kubenswrapper[4847]: I0218 00:41:27.424818 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a35538f-d541-4b9b-9774-84863b939bd6" path="/var/lib/kubelet/pods/6a35538f-d541-4b9b-9774-84863b939bd6/volumes" Feb 18 00:41:27 crc kubenswrapper[4847]: I0218 00:41:27.839442 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-k57ql" event={"ID":"637e4133-8cdb-4098-bd6a-55cb7ce569b4","Type":"ContainerStarted","Data":"400269a88b3f7df1f6be80f134dcca244b933776e32cde10869d35398c192db1"} Feb 18 00:41:30 crc kubenswrapper[4847]: I0218 00:41:30.867221 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-sffsn" event={"ID":"818015c7-8c32-4aff-9723-67548354380b","Type":"ContainerStarted","Data":"8286613ae9efbc3a6fa412f8eb3671a4bb7dd226cd01622c79296b8e84fdd3ba"} Feb 18 00:41:30 crc kubenswrapper[4847]: I0218 00:41:30.867769 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-sffsn" Feb 18 00:41:30 crc kubenswrapper[4847]: I0218 00:41:30.868894 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-c6rqp" event={"ID":"236a285b-dac4-49c1-9cf1-f76b5b0f6a79","Type":"ContainerStarted","Data":"3d647e2635a671a654df017a200012267819e4ebca7f2e571b2574aa21d97566"} Feb 18 00:41:30 crc kubenswrapper[4847]: I0218 00:41:30.870089 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vjmwl" event={"ID":"aeb7db89-c5aa-4675-aa0c-f4f6a34b109b","Type":"ContainerStarted","Data":"ed6cdbea03c2392c7b85c6f366421fb402e9db20602d613c1e87071aa79de21f"} Feb 18 00:41:30 crc kubenswrapper[4847]: I0218 00:41:30.871531 4847 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-k57ql" event={"ID":"637e4133-8cdb-4098-bd6a-55cb7ce569b4","Type":"ContainerStarted","Data":"71e633bfb2e6cb9a74cc4170548844d0cf31222c41c5900fee9c3151cf684948"} Feb 18 00:41:30 crc kubenswrapper[4847]: I0218 00:41:30.871701 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-k57ql" Feb 18 00:41:30 crc kubenswrapper[4847]: I0218 00:41:30.886015 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-sffsn" podStartSLOduration=1.546786684 podStartE2EDuration="5.885999673s" podCreationTimestamp="2026-02-18 00:41:25 +0000 UTC" firstStartedPulling="2026-02-18 00:41:25.929018441 +0000 UTC m=+959.306369383" lastFinishedPulling="2026-02-18 00:41:30.26823139 +0000 UTC m=+963.645582372" observedRunningTime="2026-02-18 00:41:30.882386214 +0000 UTC m=+964.259737196" watchObservedRunningTime="2026-02-18 00:41:30.885999673 +0000 UTC m=+964.263350615" Feb 18 00:41:30 crc kubenswrapper[4847]: I0218 00:41:30.907946 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-k57ql" podStartSLOduration=2.626892924 podStartE2EDuration="5.907922858s" podCreationTimestamp="2026-02-18 00:41:25 +0000 UTC" firstStartedPulling="2026-02-18 00:41:26.956937957 +0000 UTC m=+960.334288899" lastFinishedPulling="2026-02-18 00:41:30.237967891 +0000 UTC m=+963.615318833" observedRunningTime="2026-02-18 00:41:30.900988539 +0000 UTC m=+964.278339481" watchObservedRunningTime="2026-02-18 00:41:30.907922858 +0000 UTC m=+964.285273800" Feb 18 00:41:30 crc kubenswrapper[4847]: I0218 00:41:30.956341 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-vjmwl" podStartSLOduration=2.102648225 podStartE2EDuration="5.95632464s" 
podCreationTimestamp="2026-02-18 00:41:25 +0000 UTC" firstStartedPulling="2026-02-18 00:41:26.384630605 +0000 UTC m=+959.761981547" lastFinishedPulling="2026-02-18 00:41:30.23830702 +0000 UTC m=+963.615657962" observedRunningTime="2026-02-18 00:41:30.954528126 +0000 UTC m=+964.331879088" watchObservedRunningTime="2026-02-18 00:41:30.95632464 +0000 UTC m=+964.333675582" Feb 18 00:41:33 crc kubenswrapper[4847]: I0218 00:41:33.911247 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-c6rqp" event={"ID":"236a285b-dac4-49c1-9cf1-f76b5b0f6a79","Type":"ContainerStarted","Data":"e4702d2aac453fae53db09a234a00782ae4af6ac9a81ec054b36a2bffb026179"} Feb 18 00:41:33 crc kubenswrapper[4847]: I0218 00:41:33.943815 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-c6rqp" podStartSLOduration=2.041487331 podStartE2EDuration="8.943803666s" podCreationTimestamp="2026-02-18 00:41:25 +0000 UTC" firstStartedPulling="2026-02-18 00:41:26.309627863 +0000 UTC m=+959.686978805" lastFinishedPulling="2026-02-18 00:41:33.211944188 +0000 UTC m=+966.589295140" observedRunningTime="2026-02-18 00:41:33.941204382 +0000 UTC m=+967.318555324" watchObservedRunningTime="2026-02-18 00:41:33.943803666 +0000 UTC m=+967.321154608" Feb 18 00:41:35 crc kubenswrapper[4847]: I0218 00:41:35.924196 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-sffsn" Feb 18 00:41:36 crc kubenswrapper[4847]: I0218 00:41:36.174089 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:36 crc kubenswrapper[4847]: I0218 00:41:36.174193 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:36 crc kubenswrapper[4847]: I0218 00:41:36.182541 4847 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:36 crc kubenswrapper[4847]: I0218 00:41:36.943907 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:41:37 crc kubenswrapper[4847]: I0218 00:41:37.013820 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-tfxw5"] Feb 18 00:41:46 crc kubenswrapper[4847]: I0218 00:41:46.475683 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-k57ql" Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.099276 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-tfxw5" podUID="b4d13f62-c469-4050-8974-8ccf32bf0bce" containerName="console" containerID="cri-o://c04ed3fb1a5eb012e6bd85313ec04cebcb1925c7dd87b5d6aced187869d79768" gracePeriod=15 Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.611264 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-tfxw5_b4d13f62-c469-4050-8974-8ccf32bf0bce/console/0.log" Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.611706 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.761017 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b4d13f62-c469-4050-8974-8ccf32bf0bce-console-oauth-config\") pod \"b4d13f62-c469-4050-8974-8ccf32bf0bce\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.761075 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6hwx\" (UniqueName: \"kubernetes.io/projected/b4d13f62-c469-4050-8974-8ccf32bf0bce-kube-api-access-s6hwx\") pod \"b4d13f62-c469-4050-8974-8ccf32bf0bce\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.761108 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-trusted-ca-bundle\") pod \"b4d13f62-c469-4050-8974-8ccf32bf0bce\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.761180 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b4d13f62-c469-4050-8974-8ccf32bf0bce-console-serving-cert\") pod \"b4d13f62-c469-4050-8974-8ccf32bf0bce\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.761198 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-service-ca\") pod \"b4d13f62-c469-4050-8974-8ccf32bf0bce\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.761236 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-oauth-serving-cert\") pod \"b4d13f62-c469-4050-8974-8ccf32bf0bce\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.761312 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-console-config\") pod \"b4d13f62-c469-4050-8974-8ccf32bf0bce\" (UID: \"b4d13f62-c469-4050-8974-8ccf32bf0bce\") " Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.762319 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-console-config" (OuterVolumeSpecName: "console-config") pod "b4d13f62-c469-4050-8974-8ccf32bf0bce" (UID: "b4d13f62-c469-4050-8974-8ccf32bf0bce"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.762521 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b4d13f62-c469-4050-8974-8ccf32bf0bce" (UID: "b4d13f62-c469-4050-8974-8ccf32bf0bce"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.762545 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b4d13f62-c469-4050-8974-8ccf32bf0bce" (UID: "b4d13f62-c469-4050-8974-8ccf32bf0bce"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.762711 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-service-ca" (OuterVolumeSpecName: "service-ca") pod "b4d13f62-c469-4050-8974-8ccf32bf0bce" (UID: "b4d13f62-c469-4050-8974-8ccf32bf0bce"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.769451 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4d13f62-c469-4050-8974-8ccf32bf0bce-kube-api-access-s6hwx" (OuterVolumeSpecName: "kube-api-access-s6hwx") pod "b4d13f62-c469-4050-8974-8ccf32bf0bce" (UID: "b4d13f62-c469-4050-8974-8ccf32bf0bce"). InnerVolumeSpecName "kube-api-access-s6hwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.769633 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d13f62-c469-4050-8974-8ccf32bf0bce-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b4d13f62-c469-4050-8974-8ccf32bf0bce" (UID: "b4d13f62-c469-4050-8974-8ccf32bf0bce"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.770543 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4d13f62-c469-4050-8974-8ccf32bf0bce-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b4d13f62-c469-4050-8974-8ccf32bf0bce" (UID: "b4d13f62-c469-4050-8974-8ccf32bf0bce"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.862696 4847 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b4d13f62-c469-4050-8974-8ccf32bf0bce-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.862961 4847 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.862972 4847 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.862981 4847 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.862990 4847 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b4d13f62-c469-4050-8974-8ccf32bf0bce-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.862999 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6hwx\" (UniqueName: \"kubernetes.io/projected/b4d13f62-c469-4050-8974-8ccf32bf0bce-kube-api-access-s6hwx\") on node \"crc\" DevicePath \"\"" Feb 18 00:42:02 crc kubenswrapper[4847]: I0218 00:42:02.863008 4847 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4d13f62-c469-4050-8974-8ccf32bf0bce-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:42:03 crc 
kubenswrapper[4847]: I0218 00:42:03.175081 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-tfxw5_b4d13f62-c469-4050-8974-8ccf32bf0bce/console/0.log" Feb 18 00:42:03 crc kubenswrapper[4847]: I0218 00:42:03.175142 4847 generic.go:334] "Generic (PLEG): container finished" podID="b4d13f62-c469-4050-8974-8ccf32bf0bce" containerID="c04ed3fb1a5eb012e6bd85313ec04cebcb1925c7dd87b5d6aced187869d79768" exitCode=2 Feb 18 00:42:03 crc kubenswrapper[4847]: I0218 00:42:03.175175 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tfxw5" event={"ID":"b4d13f62-c469-4050-8974-8ccf32bf0bce","Type":"ContainerDied","Data":"c04ed3fb1a5eb012e6bd85313ec04cebcb1925c7dd87b5d6aced187869d79768"} Feb 18 00:42:03 crc kubenswrapper[4847]: I0218 00:42:03.175210 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-tfxw5" event={"ID":"b4d13f62-c469-4050-8974-8ccf32bf0bce","Type":"ContainerDied","Data":"bae068a4e4bb7fc552f27d8d23090f2bc1a1640c3c5e533a7574fd19bbfab549"} Feb 18 00:42:03 crc kubenswrapper[4847]: I0218 00:42:03.175231 4847 scope.go:117] "RemoveContainer" containerID="c04ed3fb1a5eb012e6bd85313ec04cebcb1925c7dd87b5d6aced187869d79768" Feb 18 00:42:03 crc kubenswrapper[4847]: I0218 00:42:03.175246 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-tfxw5" Feb 18 00:42:03 crc kubenswrapper[4847]: I0218 00:42:03.217706 4847 scope.go:117] "RemoveContainer" containerID="c04ed3fb1a5eb012e6bd85313ec04cebcb1925c7dd87b5d6aced187869d79768" Feb 18 00:42:03 crc kubenswrapper[4847]: E0218 00:42:03.221427 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c04ed3fb1a5eb012e6bd85313ec04cebcb1925c7dd87b5d6aced187869d79768\": container with ID starting with c04ed3fb1a5eb012e6bd85313ec04cebcb1925c7dd87b5d6aced187869d79768 not found: ID does not exist" containerID="c04ed3fb1a5eb012e6bd85313ec04cebcb1925c7dd87b5d6aced187869d79768" Feb 18 00:42:03 crc kubenswrapper[4847]: I0218 00:42:03.221493 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c04ed3fb1a5eb012e6bd85313ec04cebcb1925c7dd87b5d6aced187869d79768"} err="failed to get container status \"c04ed3fb1a5eb012e6bd85313ec04cebcb1925c7dd87b5d6aced187869d79768\": rpc error: code = NotFound desc = could not find container \"c04ed3fb1a5eb012e6bd85313ec04cebcb1925c7dd87b5d6aced187869d79768\": container with ID starting with c04ed3fb1a5eb012e6bd85313ec04cebcb1925c7dd87b5d6aced187869d79768 not found: ID does not exist" Feb 18 00:42:03 crc kubenswrapper[4847]: I0218 00:42:03.224190 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-tfxw5"] Feb 18 00:42:03 crc kubenswrapper[4847]: I0218 00:42:03.228545 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-tfxw5"] Feb 18 00:42:03 crc kubenswrapper[4847]: I0218 00:42:03.416682 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4d13f62-c469-4050-8974-8ccf32bf0bce" path="/var/lib/kubelet/pods/b4d13f62-c469-4050-8974-8ccf32bf0bce/volumes" Feb 18 00:42:08 crc kubenswrapper[4847]: I0218 00:42:08.902515 4847 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2"] Feb 18 00:42:08 crc kubenswrapper[4847]: E0218 00:42:08.903410 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d13f62-c469-4050-8974-8ccf32bf0bce" containerName="console" Feb 18 00:42:08 crc kubenswrapper[4847]: I0218 00:42:08.903427 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d13f62-c469-4050-8974-8ccf32bf0bce" containerName="console" Feb 18 00:42:08 crc kubenswrapper[4847]: I0218 00:42:08.903643 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d13f62-c469-4050-8974-8ccf32bf0bce" containerName="console" Feb 18 00:42:08 crc kubenswrapper[4847]: I0218 00:42:08.904939 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2" Feb 18 00:42:08 crc kubenswrapper[4847]: I0218 00:42:08.912214 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2"] Feb 18 00:42:08 crc kubenswrapper[4847]: I0218 00:42:08.913027 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 18 00:42:09 crc kubenswrapper[4847]: I0218 00:42:09.069686 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a04b076e-790c-44cc-8aab-b77901dceadb-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2\" (UID: \"a04b076e-790c-44cc-8aab-b77901dceadb\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2" Feb 18 00:42:09 crc kubenswrapper[4847]: I0218 00:42:09.069921 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/a04b076e-790c-44cc-8aab-b77901dceadb-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2\" (UID: \"a04b076e-790c-44cc-8aab-b77901dceadb\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2" Feb 18 00:42:09 crc kubenswrapper[4847]: I0218 00:42:09.069964 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcrt7\" (UniqueName: \"kubernetes.io/projected/a04b076e-790c-44cc-8aab-b77901dceadb-kube-api-access-dcrt7\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2\" (UID: \"a04b076e-790c-44cc-8aab-b77901dceadb\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2" Feb 18 00:42:09 crc kubenswrapper[4847]: I0218 00:42:09.171858 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a04b076e-790c-44cc-8aab-b77901dceadb-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2\" (UID: \"a04b076e-790c-44cc-8aab-b77901dceadb\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2" Feb 18 00:42:09 crc kubenswrapper[4847]: I0218 00:42:09.171936 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcrt7\" (UniqueName: \"kubernetes.io/projected/a04b076e-790c-44cc-8aab-b77901dceadb-kube-api-access-dcrt7\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2\" (UID: \"a04b076e-790c-44cc-8aab-b77901dceadb\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2" Feb 18 00:42:09 crc kubenswrapper[4847]: I0218 00:42:09.172066 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a04b076e-790c-44cc-8aab-b77901dceadb-util\") pod 
\"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2\" (UID: \"a04b076e-790c-44cc-8aab-b77901dceadb\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2" Feb 18 00:42:09 crc kubenswrapper[4847]: I0218 00:42:09.172350 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a04b076e-790c-44cc-8aab-b77901dceadb-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2\" (UID: \"a04b076e-790c-44cc-8aab-b77901dceadb\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2" Feb 18 00:42:09 crc kubenswrapper[4847]: I0218 00:42:09.172425 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a04b076e-790c-44cc-8aab-b77901dceadb-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2\" (UID: \"a04b076e-790c-44cc-8aab-b77901dceadb\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2" Feb 18 00:42:09 crc kubenswrapper[4847]: I0218 00:42:09.200277 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcrt7\" (UniqueName: \"kubernetes.io/projected/a04b076e-790c-44cc-8aab-b77901dceadb-kube-api-access-dcrt7\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2\" (UID: \"a04b076e-790c-44cc-8aab-b77901dceadb\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2" Feb 18 00:42:09 crc kubenswrapper[4847]: I0218 00:42:09.222833 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2" Feb 18 00:42:09 crc kubenswrapper[4847]: I0218 00:42:09.788919 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2"] Feb 18 00:42:10 crc kubenswrapper[4847]: I0218 00:42:10.257793 4847 generic.go:334] "Generic (PLEG): container finished" podID="a04b076e-790c-44cc-8aab-b77901dceadb" containerID="7a9eabbc1be7963d2541d9a6203d7f7cec119c66e9beaa52d223b2b619e79918" exitCode=0 Feb 18 00:42:10 crc kubenswrapper[4847]: I0218 00:42:10.257894 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2" event={"ID":"a04b076e-790c-44cc-8aab-b77901dceadb","Type":"ContainerDied","Data":"7a9eabbc1be7963d2541d9a6203d7f7cec119c66e9beaa52d223b2b619e79918"} Feb 18 00:42:10 crc kubenswrapper[4847]: I0218 00:42:10.257994 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2" event={"ID":"a04b076e-790c-44cc-8aab-b77901dceadb","Type":"ContainerStarted","Data":"bdfa820fc870b1a97f971829fa3d830a24b921ec1c8242c9cb1fdff030b1648b"} Feb 18 00:42:10 crc kubenswrapper[4847]: I0218 00:42:10.260392 4847 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 00:42:12 crc kubenswrapper[4847]: I0218 00:42:12.278929 4847 generic.go:334] "Generic (PLEG): container finished" podID="a04b076e-790c-44cc-8aab-b77901dceadb" containerID="8093f34de846108df1dcc622ae20d634caaa8d9795127343b95592c5f8b52007" exitCode=0 Feb 18 00:42:12 crc kubenswrapper[4847]: I0218 00:42:12.279044 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2" 
event={"ID":"a04b076e-790c-44cc-8aab-b77901dceadb","Type":"ContainerDied","Data":"8093f34de846108df1dcc622ae20d634caaa8d9795127343b95592c5f8b52007"} Feb 18 00:42:13 crc kubenswrapper[4847]: I0218 00:42:13.300825 4847 generic.go:334] "Generic (PLEG): container finished" podID="a04b076e-790c-44cc-8aab-b77901dceadb" containerID="7312890903ee2f1f9516cb6fd0237e7b5fed900086a3e2935310cd11a3671b97" exitCode=0 Feb 18 00:42:13 crc kubenswrapper[4847]: I0218 00:42:13.301220 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2" event={"ID":"a04b076e-790c-44cc-8aab-b77901dceadb","Type":"ContainerDied","Data":"7312890903ee2f1f9516cb6fd0237e7b5fed900086a3e2935310cd11a3671b97"} Feb 18 00:42:13 crc kubenswrapper[4847]: E0218 00:42:13.366228 4847 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/NetworkManager-dispatcher.service\": RecentStats: unable to find data in memory cache]" Feb 18 00:42:14 crc kubenswrapper[4847]: I0218 00:42:14.728974 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2" Feb 18 00:42:14 crc kubenswrapper[4847]: I0218 00:42:14.872994 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a04b076e-790c-44cc-8aab-b77901dceadb-bundle\") pod \"a04b076e-790c-44cc-8aab-b77901dceadb\" (UID: \"a04b076e-790c-44cc-8aab-b77901dceadb\") " Feb 18 00:42:14 crc kubenswrapper[4847]: I0218 00:42:14.873149 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcrt7\" (UniqueName: \"kubernetes.io/projected/a04b076e-790c-44cc-8aab-b77901dceadb-kube-api-access-dcrt7\") pod \"a04b076e-790c-44cc-8aab-b77901dceadb\" (UID: \"a04b076e-790c-44cc-8aab-b77901dceadb\") " Feb 18 00:42:14 crc kubenswrapper[4847]: I0218 00:42:14.873240 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a04b076e-790c-44cc-8aab-b77901dceadb-util\") pod \"a04b076e-790c-44cc-8aab-b77901dceadb\" (UID: \"a04b076e-790c-44cc-8aab-b77901dceadb\") " Feb 18 00:42:14 crc kubenswrapper[4847]: I0218 00:42:14.874424 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a04b076e-790c-44cc-8aab-b77901dceadb-bundle" (OuterVolumeSpecName: "bundle") pod "a04b076e-790c-44cc-8aab-b77901dceadb" (UID: "a04b076e-790c-44cc-8aab-b77901dceadb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:42:14 crc kubenswrapper[4847]: I0218 00:42:14.881725 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a04b076e-790c-44cc-8aab-b77901dceadb-kube-api-access-dcrt7" (OuterVolumeSpecName: "kube-api-access-dcrt7") pod "a04b076e-790c-44cc-8aab-b77901dceadb" (UID: "a04b076e-790c-44cc-8aab-b77901dceadb"). InnerVolumeSpecName "kube-api-access-dcrt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:42:14 crc kubenswrapper[4847]: I0218 00:42:14.895460 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a04b076e-790c-44cc-8aab-b77901dceadb-util" (OuterVolumeSpecName: "util") pod "a04b076e-790c-44cc-8aab-b77901dceadb" (UID: "a04b076e-790c-44cc-8aab-b77901dceadb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:42:14 crc kubenswrapper[4847]: I0218 00:42:14.975723 4847 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a04b076e-790c-44cc-8aab-b77901dceadb-util\") on node \"crc\" DevicePath \"\"" Feb 18 00:42:14 crc kubenswrapper[4847]: I0218 00:42:14.975766 4847 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a04b076e-790c-44cc-8aab-b77901dceadb-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:42:14 crc kubenswrapper[4847]: I0218 00:42:14.975779 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcrt7\" (UniqueName: \"kubernetes.io/projected/a04b076e-790c-44cc-8aab-b77901dceadb-kube-api-access-dcrt7\") on node \"crc\" DevicePath \"\"" Feb 18 00:42:15 crc kubenswrapper[4847]: I0218 00:42:15.324277 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2" event={"ID":"a04b076e-790c-44cc-8aab-b77901dceadb","Type":"ContainerDied","Data":"bdfa820fc870b1a97f971829fa3d830a24b921ec1c8242c9cb1fdff030b1648b"} Feb 18 00:42:15 crc kubenswrapper[4847]: I0218 00:42:15.324630 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdfa820fc870b1a97f971829fa3d830a24b921ec1c8242c9cb1fdff030b1648b" Feb 18 00:42:15 crc kubenswrapper[4847]: I0218 00:42:15.324409 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.369885 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b8b64c6dc-l56n2"] Feb 18 00:42:24 crc kubenswrapper[4847]: E0218 00:42:24.370804 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04b076e-790c-44cc-8aab-b77901dceadb" containerName="extract" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.370821 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04b076e-790c-44cc-8aab-b77901dceadb" containerName="extract" Feb 18 00:42:24 crc kubenswrapper[4847]: E0218 00:42:24.370842 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04b076e-790c-44cc-8aab-b77901dceadb" containerName="pull" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.370850 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04b076e-790c-44cc-8aab-b77901dceadb" containerName="pull" Feb 18 00:42:24 crc kubenswrapper[4847]: E0218 00:42:24.370864 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a04b076e-790c-44cc-8aab-b77901dceadb" containerName="util" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.370872 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="a04b076e-790c-44cc-8aab-b77901dceadb" containerName="util" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.371018 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="a04b076e-790c-44cc-8aab-b77901dceadb" containerName="extract" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.371668 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5b8b64c6dc-l56n2" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.373197 4847 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-p75hc" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.373933 4847 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.376461 4847 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.376855 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.377048 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.389761 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b8b64c6dc-l56n2"] Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.440672 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/817d642e-8dbe-4edb-81b7-21e3b47751bb-apiservice-cert\") pod \"metallb-operator-controller-manager-5b8b64c6dc-l56n2\" (UID: \"817d642e-8dbe-4edb-81b7-21e3b47751bb\") " pod="metallb-system/metallb-operator-controller-manager-5b8b64c6dc-l56n2" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.440861 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/817d642e-8dbe-4edb-81b7-21e3b47751bb-webhook-cert\") pod \"metallb-operator-controller-manager-5b8b64c6dc-l56n2\" (UID: 
\"817d642e-8dbe-4edb-81b7-21e3b47751bb\") " pod="metallb-system/metallb-operator-controller-manager-5b8b64c6dc-l56n2" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.440897 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9nxk\" (UniqueName: \"kubernetes.io/projected/817d642e-8dbe-4edb-81b7-21e3b47751bb-kube-api-access-t9nxk\") pod \"metallb-operator-controller-manager-5b8b64c6dc-l56n2\" (UID: \"817d642e-8dbe-4edb-81b7-21e3b47751bb\") " pod="metallb-system/metallb-operator-controller-manager-5b8b64c6dc-l56n2" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.542908 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/817d642e-8dbe-4edb-81b7-21e3b47751bb-apiservice-cert\") pod \"metallb-operator-controller-manager-5b8b64c6dc-l56n2\" (UID: \"817d642e-8dbe-4edb-81b7-21e3b47751bb\") " pod="metallb-system/metallb-operator-controller-manager-5b8b64c6dc-l56n2" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.542954 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/817d642e-8dbe-4edb-81b7-21e3b47751bb-webhook-cert\") pod \"metallb-operator-controller-manager-5b8b64c6dc-l56n2\" (UID: \"817d642e-8dbe-4edb-81b7-21e3b47751bb\") " pod="metallb-system/metallb-operator-controller-manager-5b8b64c6dc-l56n2" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.542986 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9nxk\" (UniqueName: \"kubernetes.io/projected/817d642e-8dbe-4edb-81b7-21e3b47751bb-kube-api-access-t9nxk\") pod \"metallb-operator-controller-manager-5b8b64c6dc-l56n2\" (UID: \"817d642e-8dbe-4edb-81b7-21e3b47751bb\") " pod="metallb-system/metallb-operator-controller-manager-5b8b64c6dc-l56n2" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.556486 4847 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/817d642e-8dbe-4edb-81b7-21e3b47751bb-apiservice-cert\") pod \"metallb-operator-controller-manager-5b8b64c6dc-l56n2\" (UID: \"817d642e-8dbe-4edb-81b7-21e3b47751bb\") " pod="metallb-system/metallb-operator-controller-manager-5b8b64c6dc-l56n2" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.556912 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/817d642e-8dbe-4edb-81b7-21e3b47751bb-webhook-cert\") pod \"metallb-operator-controller-manager-5b8b64c6dc-l56n2\" (UID: \"817d642e-8dbe-4edb-81b7-21e3b47751bb\") " pod="metallb-system/metallb-operator-controller-manager-5b8b64c6dc-l56n2" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.573366 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9nxk\" (UniqueName: \"kubernetes.io/projected/817d642e-8dbe-4edb-81b7-21e3b47751bb-kube-api-access-t9nxk\") pod \"metallb-operator-controller-manager-5b8b64c6dc-l56n2\" (UID: \"817d642e-8dbe-4edb-81b7-21e3b47751bb\") " pod="metallb-system/metallb-operator-controller-manager-5b8b64c6dc-l56n2" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.684181 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7c8b4689bf-5lg4r"] Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.685108 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7c8b4689bf-5lg4r" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.686807 4847 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.688965 4847 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-5ctsn" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.689595 4847 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.699718 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5b8b64c6dc-l56n2" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.751162 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7c8b4689bf-5lg4r"] Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.846132 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2f76\" (UniqueName: \"kubernetes.io/projected/2ee3c157-6f35-403f-a563-00c85ea7cdbf-kube-api-access-p2f76\") pod \"metallb-operator-webhook-server-7c8b4689bf-5lg4r\" (UID: \"2ee3c157-6f35-403f-a563-00c85ea7cdbf\") " pod="metallb-system/metallb-operator-webhook-server-7c8b4689bf-5lg4r" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.846209 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2ee3c157-6f35-403f-a563-00c85ea7cdbf-webhook-cert\") pod \"metallb-operator-webhook-server-7c8b4689bf-5lg4r\" (UID: \"2ee3c157-6f35-403f-a563-00c85ea7cdbf\") " pod="metallb-system/metallb-operator-webhook-server-7c8b4689bf-5lg4r" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 
00:42:24.846231 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ee3c157-6f35-403f-a563-00c85ea7cdbf-apiservice-cert\") pod \"metallb-operator-webhook-server-7c8b4689bf-5lg4r\" (UID: \"2ee3c157-6f35-403f-a563-00c85ea7cdbf\") " pod="metallb-system/metallb-operator-webhook-server-7c8b4689bf-5lg4r" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.947933 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2ee3c157-6f35-403f-a563-00c85ea7cdbf-webhook-cert\") pod \"metallb-operator-webhook-server-7c8b4689bf-5lg4r\" (UID: \"2ee3c157-6f35-403f-a563-00c85ea7cdbf\") " pod="metallb-system/metallb-operator-webhook-server-7c8b4689bf-5lg4r" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.948233 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ee3c157-6f35-403f-a563-00c85ea7cdbf-apiservice-cert\") pod \"metallb-operator-webhook-server-7c8b4689bf-5lg4r\" (UID: \"2ee3c157-6f35-403f-a563-00c85ea7cdbf\") " pod="metallb-system/metallb-operator-webhook-server-7c8b4689bf-5lg4r" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.948308 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2f76\" (UniqueName: \"kubernetes.io/projected/2ee3c157-6f35-403f-a563-00c85ea7cdbf-kube-api-access-p2f76\") pod \"metallb-operator-webhook-server-7c8b4689bf-5lg4r\" (UID: \"2ee3c157-6f35-403f-a563-00c85ea7cdbf\") " pod="metallb-system/metallb-operator-webhook-server-7c8b4689bf-5lg4r" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.951781 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2ee3c157-6f35-403f-a563-00c85ea7cdbf-apiservice-cert\") pod 
\"metallb-operator-webhook-server-7c8b4689bf-5lg4r\" (UID: \"2ee3c157-6f35-403f-a563-00c85ea7cdbf\") " pod="metallb-system/metallb-operator-webhook-server-7c8b4689bf-5lg4r" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.951796 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2ee3c157-6f35-403f-a563-00c85ea7cdbf-webhook-cert\") pod \"metallb-operator-webhook-server-7c8b4689bf-5lg4r\" (UID: \"2ee3c157-6f35-403f-a563-00c85ea7cdbf\") " pod="metallb-system/metallb-operator-webhook-server-7c8b4689bf-5lg4r" Feb 18 00:42:24 crc kubenswrapper[4847]: I0218 00:42:24.967820 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2f76\" (UniqueName: \"kubernetes.io/projected/2ee3c157-6f35-403f-a563-00c85ea7cdbf-kube-api-access-p2f76\") pod \"metallb-operator-webhook-server-7c8b4689bf-5lg4r\" (UID: \"2ee3c157-6f35-403f-a563-00c85ea7cdbf\") " pod="metallb-system/metallb-operator-webhook-server-7c8b4689bf-5lg4r" Feb 18 00:42:25 crc kubenswrapper[4847]: I0218 00:42:25.006651 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7c8b4689bf-5lg4r" Feb 18 00:42:25 crc kubenswrapper[4847]: I0218 00:42:25.236520 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5b8b64c6dc-l56n2"] Feb 18 00:42:25 crc kubenswrapper[4847]: I0218 00:42:25.260952 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7c8b4689bf-5lg4r"] Feb 18 00:42:25 crc kubenswrapper[4847]: W0218 00:42:25.265781 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ee3c157_6f35_403f_a563_00c85ea7cdbf.slice/crio-926369872c27644d7da8753751ae3d611e07f16587b0a2f5d62273edd67313db WatchSource:0}: Error finding container 926369872c27644d7da8753751ae3d611e07f16587b0a2f5d62273edd67313db: Status 404 returned error can't find the container with id 926369872c27644d7da8753751ae3d611e07f16587b0a2f5d62273edd67313db Feb 18 00:42:25 crc kubenswrapper[4847]: I0218 00:42:25.400392 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7c8b4689bf-5lg4r" event={"ID":"2ee3c157-6f35-403f-a563-00c85ea7cdbf","Type":"ContainerStarted","Data":"926369872c27644d7da8753751ae3d611e07f16587b0a2f5d62273edd67313db"} Feb 18 00:42:25 crc kubenswrapper[4847]: I0218 00:42:25.401526 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b8b64c6dc-l56n2" event={"ID":"817d642e-8dbe-4edb-81b7-21e3b47751bb","Type":"ContainerStarted","Data":"b5c10ba42f78ab1b53216213dbbb64cc8127041c56d64bef177c070c3a41a4ba"} Feb 18 00:42:33 crc kubenswrapper[4847]: I0218 00:42:33.469202 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5b8b64c6dc-l56n2" 
event={"ID":"817d642e-8dbe-4edb-81b7-21e3b47751bb","Type":"ContainerStarted","Data":"bf2ac01ed9163496687d65d5694aadd9708920a4e7b099f74e532cdd64101751"} Feb 18 00:42:33 crc kubenswrapper[4847]: I0218 00:42:33.469750 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5b8b64c6dc-l56n2" Feb 18 00:42:33 crc kubenswrapper[4847]: I0218 00:42:33.471654 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7c8b4689bf-5lg4r" event={"ID":"2ee3c157-6f35-403f-a563-00c85ea7cdbf","Type":"ContainerStarted","Data":"9bec4e179f57d4aeb05db361cb389105671e732c2f5e584f16f8e90582dcf19f"} Feb 18 00:42:33 crc kubenswrapper[4847]: I0218 00:42:33.471745 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7c8b4689bf-5lg4r" Feb 18 00:42:33 crc kubenswrapper[4847]: I0218 00:42:33.509106 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5b8b64c6dc-l56n2" podStartSLOduration=2.222373853 podStartE2EDuration="9.509084227s" podCreationTimestamp="2026-02-18 00:42:24 +0000 UTC" firstStartedPulling="2026-02-18 00:42:25.252220609 +0000 UTC m=+1018.629571551" lastFinishedPulling="2026-02-18 00:42:32.538930963 +0000 UTC m=+1025.916281925" observedRunningTime="2026-02-18 00:42:33.495192202 +0000 UTC m=+1026.872543154" watchObservedRunningTime="2026-02-18 00:42:33.509084227 +0000 UTC m=+1026.886435169" Feb 18 00:42:33 crc kubenswrapper[4847]: I0218 00:42:33.520742 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7c8b4689bf-5lg4r" podStartSLOduration=2.21969365 podStartE2EDuration="9.520725689s" podCreationTimestamp="2026-02-18 00:42:24 +0000 UTC" firstStartedPulling="2026-02-18 00:42:25.268291645 +0000 UTC m=+1018.645642587" lastFinishedPulling="2026-02-18 
00:42:32.569323674 +0000 UTC m=+1025.946674626" observedRunningTime="2026-02-18 00:42:33.519260915 +0000 UTC m=+1026.896611857" watchObservedRunningTime="2026-02-18 00:42:33.520725689 +0000 UTC m=+1026.898076641" Feb 18 00:42:45 crc kubenswrapper[4847]: I0218 00:42:45.014330 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7c8b4689bf-5lg4r" Feb 18 00:43:04 crc kubenswrapper[4847]: I0218 00:43:04.704382 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5b8b64c6dc-l56n2" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.458515 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-m56k2"] Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.462069 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.464310 4847 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.464787 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.466114 4847 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-rmg7v" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.467951 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-nzx76"] Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.468970 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nzx76" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.472255 4847 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.486357 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-nzx76"] Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.565189 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-45fx5"] Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.566653 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-45fx5" Feb 18 00:43:05 crc kubenswrapper[4847]: W0218 00:43:05.568077 4847 reflector.go:561] object-"metallb-system"/"metallb-memberlist": failed to list *v1.Secret: secrets "metallb-memberlist" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Feb 18 00:43:05 crc kubenswrapper[4847]: W0218 00:43:05.568111 4847 reflector.go:561] object-"metallb-system"/"speaker-certs-secret": failed to list *v1.Secret: secrets "speaker-certs-secret" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Feb 18 00:43:05 crc kubenswrapper[4847]: E0218 00:43:05.568119 4847 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-memberlist\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"metallb-memberlist\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 00:43:05 crc kubenswrapper[4847]: 
E0218 00:43:05.568133 4847 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"speaker-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"speaker-certs-secret\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 00:43:05 crc kubenswrapper[4847]: W0218 00:43:05.568171 4847 reflector.go:561] object-"metallb-system"/"speaker-dockercfg-qddl6": failed to list *v1.Secret: secrets "speaker-dockercfg-qddl6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Feb 18 00:43:05 crc kubenswrapper[4847]: E0218 00:43:05.568186 4847 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"speaker-dockercfg-qddl6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"speaker-dockercfg-qddl6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.568505 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.581907 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-ppjbv"] Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.583170 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-ppjbv" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.585727 4847 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.596742 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d5737c80-5d5b-4e38-8826-620411606e6a-frr-startup\") pod \"frr-k8s-m56k2\" (UID: \"d5737c80-5d5b-4e38-8826-620411606e6a\") " pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.596783 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1734f7d8-892a-4a2b-8e64-224d75324d06-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-nzx76\" (UID: \"1734f7d8-892a-4a2b-8e64-224d75324d06\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nzx76" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.596820 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d5737c80-5d5b-4e38-8826-620411606e6a-reloader\") pod \"frr-k8s-m56k2\" (UID: \"d5737c80-5d5b-4e38-8826-620411606e6a\") " pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.597295 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5737c80-5d5b-4e38-8826-620411606e6a-metrics-certs\") pod \"frr-k8s-m56k2\" (UID: \"d5737c80-5d5b-4e38-8826-620411606e6a\") " pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.597344 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cvl6\" (UniqueName: 
\"kubernetes.io/projected/1734f7d8-892a-4a2b-8e64-224d75324d06-kube-api-access-7cvl6\") pod \"frr-k8s-webhook-server-78b44bf5bb-nzx76\" (UID: \"1734f7d8-892a-4a2b-8e64-224d75324d06\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nzx76" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.597370 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d5737c80-5d5b-4e38-8826-620411606e6a-metrics\") pod \"frr-k8s-m56k2\" (UID: \"d5737c80-5d5b-4e38-8826-620411606e6a\") " pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.597412 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d5737c80-5d5b-4e38-8826-620411606e6a-frr-conf\") pod \"frr-k8s-m56k2\" (UID: \"d5737c80-5d5b-4e38-8826-620411606e6a\") " pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.597461 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-ppjbv"] Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.597487 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mss7d\" (UniqueName: \"kubernetes.io/projected/d5737c80-5d5b-4e38-8826-620411606e6a-kube-api-access-mss7d\") pod \"frr-k8s-m56k2\" (UID: \"d5737c80-5d5b-4e38-8826-620411606e6a\") " pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.597515 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d5737c80-5d5b-4e38-8826-620411606e6a-frr-sockets\") pod \"frr-k8s-m56k2\" (UID: \"d5737c80-5d5b-4e38-8826-620411606e6a\") " pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.698844 4847 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cvl6\" (UniqueName: \"kubernetes.io/projected/1734f7d8-892a-4a2b-8e64-224d75324d06-kube-api-access-7cvl6\") pod \"frr-k8s-webhook-server-78b44bf5bb-nzx76\" (UID: \"1734f7d8-892a-4a2b-8e64-224d75324d06\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nzx76" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.698886 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d5737c80-5d5b-4e38-8826-620411606e6a-metrics\") pod \"frr-k8s-m56k2\" (UID: \"d5737c80-5d5b-4e38-8826-620411606e6a\") " pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.699114 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d5737c80-5d5b-4e38-8826-620411606e6a-frr-conf\") pod \"frr-k8s-m56k2\" (UID: \"d5737c80-5d5b-4e38-8826-620411606e6a\") " pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.699138 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kfhv\" (UniqueName: \"kubernetes.io/projected/02e790ed-2120-428f-9015-81031198b2ae-kube-api-access-7kfhv\") pod \"controller-69bbfbf88f-ppjbv\" (UID: \"02e790ed-2120-428f-9015-81031198b2ae\") " pod="metallb-system/controller-69bbfbf88f-ppjbv" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.699157 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mss7d\" (UniqueName: \"kubernetes.io/projected/d5737c80-5d5b-4e38-8826-620411606e6a-kube-api-access-mss7d\") pod \"frr-k8s-m56k2\" (UID: \"d5737c80-5d5b-4e38-8826-620411606e6a\") " pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.699179 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0cccb9a0-0f8c-44b0-9d0e-e31bcf146024-metrics-certs\") pod \"speaker-45fx5\" (UID: \"0cccb9a0-0f8c-44b0-9d0e-e31bcf146024\") " pod="metallb-system/speaker-45fx5" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.699193 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0cccb9a0-0f8c-44b0-9d0e-e31bcf146024-metallb-excludel2\") pod \"speaker-45fx5\" (UID: \"0cccb9a0-0f8c-44b0-9d0e-e31bcf146024\") " pod="metallb-system/speaker-45fx5" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.699207 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d5737c80-5d5b-4e38-8826-620411606e6a-frr-sockets\") pod \"frr-k8s-m56k2\" (UID: \"d5737c80-5d5b-4e38-8826-620411606e6a\") " pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.699226 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxph5\" (UniqueName: \"kubernetes.io/projected/0cccb9a0-0f8c-44b0-9d0e-e31bcf146024-kube-api-access-dxph5\") pod \"speaker-45fx5\" (UID: \"0cccb9a0-0f8c-44b0-9d0e-e31bcf146024\") " pod="metallb-system/speaker-45fx5" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.699270 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02e790ed-2120-428f-9015-81031198b2ae-cert\") pod \"controller-69bbfbf88f-ppjbv\" (UID: \"02e790ed-2120-428f-9015-81031198b2ae\") " pod="metallb-system/controller-69bbfbf88f-ppjbv" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.699291 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/02e790ed-2120-428f-9015-81031198b2ae-metrics-certs\") pod \"controller-69bbfbf88f-ppjbv\" (UID: \"02e790ed-2120-428f-9015-81031198b2ae\") " pod="metallb-system/controller-69bbfbf88f-ppjbv" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.699306 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d5737c80-5d5b-4e38-8826-620411606e6a-frr-startup\") pod \"frr-k8s-m56k2\" (UID: \"d5737c80-5d5b-4e38-8826-620411606e6a\") " pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.699327 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1734f7d8-892a-4a2b-8e64-224d75324d06-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-nzx76\" (UID: \"1734f7d8-892a-4a2b-8e64-224d75324d06\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nzx76" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.699343 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0cccb9a0-0f8c-44b0-9d0e-e31bcf146024-memberlist\") pod \"speaker-45fx5\" (UID: \"0cccb9a0-0f8c-44b0-9d0e-e31bcf146024\") " pod="metallb-system/speaker-45fx5" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.699368 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d5737c80-5d5b-4e38-8826-620411606e6a-reloader\") pod \"frr-k8s-m56k2\" (UID: \"d5737c80-5d5b-4e38-8826-620411606e6a\") " pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.699402 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5737c80-5d5b-4e38-8826-620411606e6a-metrics-certs\") pod \"frr-k8s-m56k2\" (UID: 
\"d5737c80-5d5b-4e38-8826-620411606e6a\") " pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.699439 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d5737c80-5d5b-4e38-8826-620411606e6a-metrics\") pod \"frr-k8s-m56k2\" (UID: \"d5737c80-5d5b-4e38-8826-620411606e6a\") " pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:05 crc kubenswrapper[4847]: E0218 00:43:05.699513 4847 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 18 00:43:05 crc kubenswrapper[4847]: E0218 00:43:05.699563 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5737c80-5d5b-4e38-8826-620411606e6a-metrics-certs podName:d5737c80-5d5b-4e38-8826-620411606e6a nodeName:}" failed. No retries permitted until 2026-02-18 00:43:06.199547563 +0000 UTC m=+1059.576898505 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d5737c80-5d5b-4e38-8826-620411606e6a-metrics-certs") pod "frr-k8s-m56k2" (UID: "d5737c80-5d5b-4e38-8826-620411606e6a") : secret "frr-k8s-certs-secret" not found Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.699691 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d5737c80-5d5b-4e38-8826-620411606e6a-frr-conf\") pod \"frr-k8s-m56k2\" (UID: \"d5737c80-5d5b-4e38-8826-620411606e6a\") " pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:05 crc kubenswrapper[4847]: E0218 00:43:05.699801 4847 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.699845 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/d5737c80-5d5b-4e38-8826-620411606e6a-frr-sockets\") pod \"frr-k8s-m56k2\" (UID: \"d5737c80-5d5b-4e38-8826-620411606e6a\") " pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:05 crc kubenswrapper[4847]: E0218 00:43:05.699854 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1734f7d8-892a-4a2b-8e64-224d75324d06-cert podName:1734f7d8-892a-4a2b-8e64-224d75324d06 nodeName:}" failed. No retries permitted until 2026-02-18 00:43:06.19983462 +0000 UTC m=+1059.577185752 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1734f7d8-892a-4a2b-8e64-224d75324d06-cert") pod "frr-k8s-webhook-server-78b44bf5bb-nzx76" (UID: "1734f7d8-892a-4a2b-8e64-224d75324d06") : secret "frr-k8s-webhook-server-cert" not found Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.700070 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d5737c80-5d5b-4e38-8826-620411606e6a-reloader\") pod \"frr-k8s-m56k2\" (UID: \"d5737c80-5d5b-4e38-8826-620411606e6a\") " pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.700712 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d5737c80-5d5b-4e38-8826-620411606e6a-frr-startup\") pod \"frr-k8s-m56k2\" (UID: \"d5737c80-5d5b-4e38-8826-620411606e6a\") " pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.717638 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mss7d\" (UniqueName: \"kubernetes.io/projected/d5737c80-5d5b-4e38-8826-620411606e6a-kube-api-access-mss7d\") pod \"frr-k8s-m56k2\" (UID: \"d5737c80-5d5b-4e38-8826-620411606e6a\") " pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.723465 4847 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7cvl6\" (UniqueName: \"kubernetes.io/projected/1734f7d8-892a-4a2b-8e64-224d75324d06-kube-api-access-7cvl6\") pod \"frr-k8s-webhook-server-78b44bf5bb-nzx76\" (UID: \"1734f7d8-892a-4a2b-8e64-224d75324d06\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nzx76" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.800897 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02e790ed-2120-428f-9015-81031198b2ae-cert\") pod \"controller-69bbfbf88f-ppjbv\" (UID: \"02e790ed-2120-428f-9015-81031198b2ae\") " pod="metallb-system/controller-69bbfbf88f-ppjbv" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.800948 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02e790ed-2120-428f-9015-81031198b2ae-metrics-certs\") pod \"controller-69bbfbf88f-ppjbv\" (UID: \"02e790ed-2120-428f-9015-81031198b2ae\") " pod="metallb-system/controller-69bbfbf88f-ppjbv" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.800989 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0cccb9a0-0f8c-44b0-9d0e-e31bcf146024-memberlist\") pod \"speaker-45fx5\" (UID: \"0cccb9a0-0f8c-44b0-9d0e-e31bcf146024\") " pod="metallb-system/speaker-45fx5" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.801055 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kfhv\" (UniqueName: \"kubernetes.io/projected/02e790ed-2120-428f-9015-81031198b2ae-kube-api-access-7kfhv\") pod \"controller-69bbfbf88f-ppjbv\" (UID: \"02e790ed-2120-428f-9015-81031198b2ae\") " pod="metallb-system/controller-69bbfbf88f-ppjbv" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.801080 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0cccb9a0-0f8c-44b0-9d0e-e31bcf146024-metrics-certs\") pod \"speaker-45fx5\" (UID: \"0cccb9a0-0f8c-44b0-9d0e-e31bcf146024\") " pod="metallb-system/speaker-45fx5" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.801326 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0cccb9a0-0f8c-44b0-9d0e-e31bcf146024-metallb-excludel2\") pod \"speaker-45fx5\" (UID: \"0cccb9a0-0f8c-44b0-9d0e-e31bcf146024\") " pod="metallb-system/speaker-45fx5" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.801353 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxph5\" (UniqueName: \"kubernetes.io/projected/0cccb9a0-0f8c-44b0-9d0e-e31bcf146024-kube-api-access-dxph5\") pod \"speaker-45fx5\" (UID: \"0cccb9a0-0f8c-44b0-9d0e-e31bcf146024\") " pod="metallb-system/speaker-45fx5" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.802129 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0cccb9a0-0f8c-44b0-9d0e-e31bcf146024-metallb-excludel2\") pod \"speaker-45fx5\" (UID: \"0cccb9a0-0f8c-44b0-9d0e-e31bcf146024\") " pod="metallb-system/speaker-45fx5" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.804301 4847 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.804847 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/02e790ed-2120-428f-9015-81031198b2ae-metrics-certs\") pod \"controller-69bbfbf88f-ppjbv\" (UID: \"02e790ed-2120-428f-9015-81031198b2ae\") " pod="metallb-system/controller-69bbfbf88f-ppjbv" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.819140 4847 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02e790ed-2120-428f-9015-81031198b2ae-cert\") pod \"controller-69bbfbf88f-ppjbv\" (UID: \"02e790ed-2120-428f-9015-81031198b2ae\") " pod="metallb-system/controller-69bbfbf88f-ppjbv" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.822638 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxph5\" (UniqueName: \"kubernetes.io/projected/0cccb9a0-0f8c-44b0-9d0e-e31bcf146024-kube-api-access-dxph5\") pod \"speaker-45fx5\" (UID: \"0cccb9a0-0f8c-44b0-9d0e-e31bcf146024\") " pod="metallb-system/speaker-45fx5" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.825346 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kfhv\" (UniqueName: \"kubernetes.io/projected/02e790ed-2120-428f-9015-81031198b2ae-kube-api-access-7kfhv\") pod \"controller-69bbfbf88f-ppjbv\" (UID: \"02e790ed-2120-428f-9015-81031198b2ae\") " pod="metallb-system/controller-69bbfbf88f-ppjbv" Feb 18 00:43:05 crc kubenswrapper[4847]: I0218 00:43:05.898676 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-ppjbv" Feb 18 00:43:06 crc kubenswrapper[4847]: I0218 00:43:06.206448 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5737c80-5d5b-4e38-8826-620411606e6a-metrics-certs\") pod \"frr-k8s-m56k2\" (UID: \"d5737c80-5d5b-4e38-8826-620411606e6a\") " pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:06 crc kubenswrapper[4847]: I0218 00:43:06.207030 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1734f7d8-892a-4a2b-8e64-224d75324d06-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-nzx76\" (UID: \"1734f7d8-892a-4a2b-8e64-224d75324d06\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nzx76" Feb 18 00:43:06 crc kubenswrapper[4847]: I0218 00:43:06.210386 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1734f7d8-892a-4a2b-8e64-224d75324d06-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-nzx76\" (UID: \"1734f7d8-892a-4a2b-8e64-224d75324d06\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nzx76" Feb 18 00:43:06 crc kubenswrapper[4847]: I0218 00:43:06.210754 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5737c80-5d5b-4e38-8826-620411606e6a-metrics-certs\") pod \"frr-k8s-m56k2\" (UID: \"d5737c80-5d5b-4e38-8826-620411606e6a\") " pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:06 crc kubenswrapper[4847]: I0218 00:43:06.353203 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-ppjbv"] Feb 18 00:43:06 crc kubenswrapper[4847]: I0218 00:43:06.380756 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:06 crc kubenswrapper[4847]: I0218 00:43:06.389547 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nzx76" Feb 18 00:43:06 crc kubenswrapper[4847]: I0218 00:43:06.448521 4847 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 18 00:43:06 crc kubenswrapper[4847]: I0218 00:43:06.457244 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0cccb9a0-0f8c-44b0-9d0e-e31bcf146024-metrics-certs\") pod \"speaker-45fx5\" (UID: \"0cccb9a0-0f8c-44b0-9d0e-e31bcf146024\") " pod="metallb-system/speaker-45fx5" Feb 18 00:43:06 crc kubenswrapper[4847]: I0218 00:43:06.629386 4847 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 18 00:43:06 crc kubenswrapper[4847]: I0218 00:43:06.648267 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0cccb9a0-0f8c-44b0-9d0e-e31bcf146024-memberlist\") pod \"speaker-45fx5\" (UID: \"0cccb9a0-0f8c-44b0-9d0e-e31bcf146024\") " pod="metallb-system/speaker-45fx5" Feb 18 00:43:06 crc kubenswrapper[4847]: I0218 00:43:06.752269 4847 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-qddl6" Feb 18 00:43:06 crc kubenswrapper[4847]: I0218 00:43:06.753152 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m56k2" event={"ID":"d5737c80-5d5b-4e38-8826-620411606e6a","Type":"ContainerStarted","Data":"6fb57a256b504206b031eb95814eb50156a10c0c2c4a917727fc810fe3142ac6"} Feb 18 00:43:06 crc kubenswrapper[4847]: I0218 00:43:06.754650 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-ppjbv" 
event={"ID":"02e790ed-2120-428f-9015-81031198b2ae","Type":"ContainerStarted","Data":"5166e246946fffbfcf654ef731d7d16a05da954a220b0fd38aa3622031154b19"} Feb 18 00:43:06 crc kubenswrapper[4847]: I0218 00:43:06.754700 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-ppjbv" event={"ID":"02e790ed-2120-428f-9015-81031198b2ae","Type":"ContainerStarted","Data":"add139a9675d45f79e2b15d9af65238b8178b4b6ebc2023f19f2a8536d8302e2"} Feb 18 00:43:06 crc kubenswrapper[4847]: I0218 00:43:06.754719 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-ppjbv" event={"ID":"02e790ed-2120-428f-9015-81031198b2ae","Type":"ContainerStarted","Data":"719e6574aca853eb1515718d4fa226b694075286f93a212f768aff2814bc4f19"} Feb 18 00:43:06 crc kubenswrapper[4847]: I0218 00:43:06.754838 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-ppjbv" Feb 18 00:43:06 crc kubenswrapper[4847]: I0218 00:43:06.779550 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-ppjbv" podStartSLOduration=1.779530758 podStartE2EDuration="1.779530758s" podCreationTimestamp="2026-02-18 00:43:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:43:06.77234483 +0000 UTC m=+1060.149695772" watchObservedRunningTime="2026-02-18 00:43:06.779530758 +0000 UTC m=+1060.156881700" Feb 18 00:43:06 crc kubenswrapper[4847]: I0218 00:43:06.781002 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-45fx5" Feb 18 00:43:06 crc kubenswrapper[4847]: W0218 00:43:06.798324 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cccb9a0_0f8c_44b0_9d0e_e31bcf146024.slice/crio-85616f64083164293bb9f1a01c6e2a25c767a45a7592ac468b4a5a6fff39b364 WatchSource:0}: Error finding container 85616f64083164293bb9f1a01c6e2a25c767a45a7592ac468b4a5a6fff39b364: Status 404 returned error can't find the container with id 85616f64083164293bb9f1a01c6e2a25c767a45a7592ac468b4a5a6fff39b364 Feb 18 00:43:06 crc kubenswrapper[4847]: I0218 00:43:06.915891 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-nzx76"] Feb 18 00:43:07 crc kubenswrapper[4847]: I0218 00:43:07.766552 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nzx76" event={"ID":"1734f7d8-892a-4a2b-8e64-224d75324d06","Type":"ContainerStarted","Data":"28e7cee58b0c2d2adcd3de7a19013614cb49b5c2c263705a50f5230a74c8af16"} Feb 18 00:43:07 crc kubenswrapper[4847]: I0218 00:43:07.769378 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-45fx5" event={"ID":"0cccb9a0-0f8c-44b0-9d0e-e31bcf146024","Type":"ContainerStarted","Data":"0289787a8d0fa4179aa7119b99e30b2eaec27c0b52ef90f3a80812cb8ac45b76"} Feb 18 00:43:07 crc kubenswrapper[4847]: I0218 00:43:07.769457 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-45fx5" event={"ID":"0cccb9a0-0f8c-44b0-9d0e-e31bcf146024","Type":"ContainerStarted","Data":"08ebd9f9f583fea9eac4c7eb1e0d69ec63a693998cc7b413ac4d6c3526d94833"} Feb 18 00:43:07 crc kubenswrapper[4847]: I0218 00:43:07.769481 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-45fx5" 
event={"ID":"0cccb9a0-0f8c-44b0-9d0e-e31bcf146024","Type":"ContainerStarted","Data":"85616f64083164293bb9f1a01c6e2a25c767a45a7592ac468b4a5a6fff39b364"} Feb 18 00:43:07 crc kubenswrapper[4847]: I0218 00:43:07.770124 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-45fx5" Feb 18 00:43:07 crc kubenswrapper[4847]: I0218 00:43:07.798880 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-45fx5" podStartSLOduration=2.798858454 podStartE2EDuration="2.798858454s" podCreationTimestamp="2026-02-18 00:43:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:43:07.792431494 +0000 UTC m=+1061.169782466" watchObservedRunningTime="2026-02-18 00:43:07.798858454 +0000 UTC m=+1061.176209416" Feb 18 00:43:14 crc kubenswrapper[4847]: I0218 00:43:14.830307 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nzx76" event={"ID":"1734f7d8-892a-4a2b-8e64-224d75324d06","Type":"ContainerStarted","Data":"dc2de7600e371eab6fa4331546c99ea3b608be9c4ba717eadab78f85aac40782"} Feb 18 00:43:14 crc kubenswrapper[4847]: I0218 00:43:14.830906 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nzx76" Feb 18 00:43:14 crc kubenswrapper[4847]: I0218 00:43:14.833254 4847 generic.go:334] "Generic (PLEG): container finished" podID="d5737c80-5d5b-4e38-8826-620411606e6a" containerID="1a222c695b11984bbefa7d526004704d61254326df3ef8c6f130bfa2260be99a" exitCode=0 Feb 18 00:43:14 crc kubenswrapper[4847]: I0218 00:43:14.833305 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m56k2" event={"ID":"d5737c80-5d5b-4e38-8826-620411606e6a","Type":"ContainerDied","Data":"1a222c695b11984bbefa7d526004704d61254326df3ef8c6f130bfa2260be99a"} Feb 18 00:43:14 crc 
kubenswrapper[4847]: I0218 00:43:14.866542 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nzx76" podStartSLOduration=2.767710605 podStartE2EDuration="9.866505071s" podCreationTimestamp="2026-02-18 00:43:05 +0000 UTC" firstStartedPulling="2026-02-18 00:43:06.926472567 +0000 UTC m=+1060.303823509" lastFinishedPulling="2026-02-18 00:43:14.025267033 +0000 UTC m=+1067.402617975" observedRunningTime="2026-02-18 00:43:14.859554359 +0000 UTC m=+1068.236905311" watchObservedRunningTime="2026-02-18 00:43:14.866505071 +0000 UTC m=+1068.243856053" Feb 18 00:43:15 crc kubenswrapper[4847]: I0218 00:43:15.842514 4847 generic.go:334] "Generic (PLEG): container finished" podID="d5737c80-5d5b-4e38-8826-620411606e6a" containerID="07a711f509598011a5902bf4e4171a2dcecccd3bf26d08e8bc6dc91819bbfb99" exitCode=0 Feb 18 00:43:15 crc kubenswrapper[4847]: I0218 00:43:15.842655 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m56k2" event={"ID":"d5737c80-5d5b-4e38-8826-620411606e6a","Type":"ContainerDied","Data":"07a711f509598011a5902bf4e4171a2dcecccd3bf26d08e8bc6dc91819bbfb99"} Feb 18 00:43:16 crc kubenswrapper[4847]: I0218 00:43:16.856827 4847 generic.go:334] "Generic (PLEG): container finished" podID="d5737c80-5d5b-4e38-8826-620411606e6a" containerID="61915161d87cf106bb1cc53cd0f216dc8c3a3b48bebcc9e23baabd58666dbf23" exitCode=0 Feb 18 00:43:16 crc kubenswrapper[4847]: I0218 00:43:16.856965 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m56k2" event={"ID":"d5737c80-5d5b-4e38-8826-620411606e6a","Type":"ContainerDied","Data":"61915161d87cf106bb1cc53cd0f216dc8c3a3b48bebcc9e23baabd58666dbf23"} Feb 18 00:43:17 crc kubenswrapper[4847]: I0218 00:43:17.881942 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m56k2" 
event={"ID":"d5737c80-5d5b-4e38-8826-620411606e6a","Type":"ContainerStarted","Data":"cbabe1c32fc9c7fe0376738befe1a49d9e54b24d8d857add35b906ea91ba78fe"} Feb 18 00:43:17 crc kubenswrapper[4847]: I0218 00:43:17.882342 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m56k2" event={"ID":"d5737c80-5d5b-4e38-8826-620411606e6a","Type":"ContainerStarted","Data":"41f7fbf2676d56ffbcf82301dba8ff46d6b6c06e9997b939571b8472cac557e8"} Feb 18 00:43:17 crc kubenswrapper[4847]: I0218 00:43:17.882365 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m56k2" event={"ID":"d5737c80-5d5b-4e38-8826-620411606e6a","Type":"ContainerStarted","Data":"25ee52e8bd2fe37ff3a0824c47ab5d8f04639a1bf1b6b5ddc2eae1e2d7e28278"} Feb 18 00:43:17 crc kubenswrapper[4847]: I0218 00:43:17.882385 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m56k2" event={"ID":"d5737c80-5d5b-4e38-8826-620411606e6a","Type":"ContainerStarted","Data":"3001fd7b3d54969629dc25d2a40953d374f95364a535e0a2be2ee32a8b8cb3c3"} Feb 18 00:43:18 crc kubenswrapper[4847]: I0218 00:43:18.911504 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m56k2" event={"ID":"d5737c80-5d5b-4e38-8826-620411606e6a","Type":"ContainerStarted","Data":"40b2662edfa938da76ce4059d8eb8df03e0cccfaea036322354c6adf3bc84802"} Feb 18 00:43:18 crc kubenswrapper[4847]: I0218 00:43:18.912325 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:18 crc kubenswrapper[4847]: I0218 00:43:18.912349 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m56k2" event={"ID":"d5737c80-5d5b-4e38-8826-620411606e6a","Type":"ContainerStarted","Data":"b67809b2dc46ac0939cec1b103f141ff7748030474a97e23202faa1129032780"} Feb 18 00:43:18 crc kubenswrapper[4847]: I0218 00:43:18.973713 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/frr-k8s-m56k2" podStartSLOduration=6.503171818 podStartE2EDuration="13.973690163s" podCreationTimestamp="2026-02-18 00:43:05 +0000 UTC" firstStartedPulling="2026-02-18 00:43:06.536730376 +0000 UTC m=+1059.914081318" lastFinishedPulling="2026-02-18 00:43:14.007248721 +0000 UTC m=+1067.384599663" observedRunningTime="2026-02-18 00:43:18.972372442 +0000 UTC m=+1072.349723414" watchObservedRunningTime="2026-02-18 00:43:18.973690163 +0000 UTC m=+1072.351041115" Feb 18 00:43:21 crc kubenswrapper[4847]: I0218 00:43:21.381268 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:21 crc kubenswrapper[4847]: I0218 00:43:21.447935 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:23 crc kubenswrapper[4847]: I0218 00:43:23.491993 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:43:23 crc kubenswrapper[4847]: I0218 00:43:23.492086 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:43:25 crc kubenswrapper[4847]: I0218 00:43:25.905594 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-ppjbv" Feb 18 00:43:26 crc kubenswrapper[4847]: I0218 00:43:26.395402 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-nzx76" Feb 18 00:43:26 crc kubenswrapper[4847]: 
I0218 00:43:26.786655 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-45fx5" Feb 18 00:43:29 crc kubenswrapper[4847]: I0218 00:43:29.490785 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-qxzfp"] Feb 18 00:43:29 crc kubenswrapper[4847]: I0218 00:43:29.492578 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qxzfp" Feb 18 00:43:29 crc kubenswrapper[4847]: I0218 00:43:29.498872 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 18 00:43:29 crc kubenswrapper[4847]: I0218 00:43:29.507626 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 18 00:43:29 crc kubenswrapper[4847]: I0218 00:43:29.507650 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-8frkb" Feb 18 00:43:29 crc kubenswrapper[4847]: I0218 00:43:29.509250 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxmd5\" (UniqueName: \"kubernetes.io/projected/df8c5b64-c6aa-4976-85af-8b96b92ac3bb-kube-api-access-pxmd5\") pod \"openstack-operator-index-qxzfp\" (UID: \"df8c5b64-c6aa-4976-85af-8b96b92ac3bb\") " pod="openstack-operators/openstack-operator-index-qxzfp" Feb 18 00:43:29 crc kubenswrapper[4847]: I0218 00:43:29.512327 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qxzfp"] Feb 18 00:43:29 crc kubenswrapper[4847]: I0218 00:43:29.610116 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxmd5\" (UniqueName: \"kubernetes.io/projected/df8c5b64-c6aa-4976-85af-8b96b92ac3bb-kube-api-access-pxmd5\") pod \"openstack-operator-index-qxzfp\" (UID: 
\"df8c5b64-c6aa-4976-85af-8b96b92ac3bb\") " pod="openstack-operators/openstack-operator-index-qxzfp" Feb 18 00:43:29 crc kubenswrapper[4847]: I0218 00:43:29.635723 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxmd5\" (UniqueName: \"kubernetes.io/projected/df8c5b64-c6aa-4976-85af-8b96b92ac3bb-kube-api-access-pxmd5\") pod \"openstack-operator-index-qxzfp\" (UID: \"df8c5b64-c6aa-4976-85af-8b96b92ac3bb\") " pod="openstack-operators/openstack-operator-index-qxzfp" Feb 18 00:43:29 crc kubenswrapper[4847]: I0218 00:43:29.844038 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qxzfp" Feb 18 00:43:30 crc kubenswrapper[4847]: I0218 00:43:30.421529 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qxzfp"] Feb 18 00:43:31 crc kubenswrapper[4847]: I0218 00:43:31.033224 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qxzfp" event={"ID":"df8c5b64-c6aa-4976-85af-8b96b92ac3bb","Type":"ContainerStarted","Data":"400c1d380407d5173eed26e106b19240e77d8fc8daa397569b312c7f19bf4616"} Feb 18 00:43:32 crc kubenswrapper[4847]: I0218 00:43:32.048219 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-qxzfp"] Feb 18 00:43:32 crc kubenswrapper[4847]: I0218 00:43:32.451793 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-s6d6x"] Feb 18 00:43:32 crc kubenswrapper[4847]: I0218 00:43:32.452843 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-s6d6x" Feb 18 00:43:32 crc kubenswrapper[4847]: I0218 00:43:32.479646 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-s6d6x"] Feb 18 00:43:32 crc kubenswrapper[4847]: I0218 00:43:32.559084 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrlnl\" (UniqueName: \"kubernetes.io/projected/57755abc-d7e9-479b-812c-6ddacee7d1be-kube-api-access-mrlnl\") pod \"openstack-operator-index-s6d6x\" (UID: \"57755abc-d7e9-479b-812c-6ddacee7d1be\") " pod="openstack-operators/openstack-operator-index-s6d6x" Feb 18 00:43:32 crc kubenswrapper[4847]: I0218 00:43:32.661489 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrlnl\" (UniqueName: \"kubernetes.io/projected/57755abc-d7e9-479b-812c-6ddacee7d1be-kube-api-access-mrlnl\") pod \"openstack-operator-index-s6d6x\" (UID: \"57755abc-d7e9-479b-812c-6ddacee7d1be\") " pod="openstack-operators/openstack-operator-index-s6d6x" Feb 18 00:43:32 crc kubenswrapper[4847]: I0218 00:43:32.685515 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrlnl\" (UniqueName: \"kubernetes.io/projected/57755abc-d7e9-479b-812c-6ddacee7d1be-kube-api-access-mrlnl\") pod \"openstack-operator-index-s6d6x\" (UID: \"57755abc-d7e9-479b-812c-6ddacee7d1be\") " pod="openstack-operators/openstack-operator-index-s6d6x" Feb 18 00:43:32 crc kubenswrapper[4847]: I0218 00:43:32.784733 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-s6d6x" Feb 18 00:43:33 crc kubenswrapper[4847]: I0218 00:43:33.054789 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qxzfp" event={"ID":"df8c5b64-c6aa-4976-85af-8b96b92ac3bb","Type":"ContainerStarted","Data":"6bdeacc447c991cfa9d50be5c5366ac828521da0d516dd590549a6889efe6a9c"} Feb 18 00:43:33 crc kubenswrapper[4847]: I0218 00:43:33.054927 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-qxzfp" podUID="df8c5b64-c6aa-4976-85af-8b96b92ac3bb" containerName="registry-server" containerID="cri-o://6bdeacc447c991cfa9d50be5c5366ac828521da0d516dd590549a6889efe6a9c" gracePeriod=2 Feb 18 00:43:33 crc kubenswrapper[4847]: I0218 00:43:33.080973 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-qxzfp" podStartSLOduration=1.89486483 podStartE2EDuration="4.080948512s" podCreationTimestamp="2026-02-18 00:43:29 +0000 UTC" firstStartedPulling="2026-02-18 00:43:30.429709153 +0000 UTC m=+1083.807060125" lastFinishedPulling="2026-02-18 00:43:32.615792865 +0000 UTC m=+1085.993143807" observedRunningTime="2026-02-18 00:43:33.079405446 +0000 UTC m=+1086.456756398" watchObservedRunningTime="2026-02-18 00:43:33.080948512 +0000 UTC m=+1086.458299494" Feb 18 00:43:33 crc kubenswrapper[4847]: I0218 00:43:33.112933 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-s6d6x"] Feb 18 00:43:33 crc kubenswrapper[4847]: W0218 00:43:33.170388 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755abc_d7e9_479b_812c_6ddacee7d1be.slice/crio-6a8b53cb2b170a2e3636d10da70cecc3424762fae9edfe1aff936a2f41bfb54a WatchSource:0}: Error finding container 
6a8b53cb2b170a2e3636d10da70cecc3424762fae9edfe1aff936a2f41bfb54a: Status 404 returned error can't find the container with id 6a8b53cb2b170a2e3636d10da70cecc3424762fae9edfe1aff936a2f41bfb54a Feb 18 00:43:33 crc kubenswrapper[4847]: I0218 00:43:33.463456 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qxzfp" Feb 18 00:43:33 crc kubenswrapper[4847]: I0218 00:43:33.616151 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxmd5\" (UniqueName: \"kubernetes.io/projected/df8c5b64-c6aa-4976-85af-8b96b92ac3bb-kube-api-access-pxmd5\") pod \"df8c5b64-c6aa-4976-85af-8b96b92ac3bb\" (UID: \"df8c5b64-c6aa-4976-85af-8b96b92ac3bb\") " Feb 18 00:43:33 crc kubenswrapper[4847]: I0218 00:43:33.621760 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df8c5b64-c6aa-4976-85af-8b96b92ac3bb-kube-api-access-pxmd5" (OuterVolumeSpecName: "kube-api-access-pxmd5") pod "df8c5b64-c6aa-4976-85af-8b96b92ac3bb" (UID: "df8c5b64-c6aa-4976-85af-8b96b92ac3bb"). InnerVolumeSpecName "kube-api-access-pxmd5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:43:33 crc kubenswrapper[4847]: I0218 00:43:33.718764 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxmd5\" (UniqueName: \"kubernetes.io/projected/df8c5b64-c6aa-4976-85af-8b96b92ac3bb-kube-api-access-pxmd5\") on node \"crc\" DevicePath \"\"" Feb 18 00:43:34 crc kubenswrapper[4847]: I0218 00:43:34.068866 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-s6d6x" event={"ID":"57755abc-d7e9-479b-812c-6ddacee7d1be","Type":"ContainerStarted","Data":"a8b11564c1d7e80106a516afd0bba58e5fa64c535340c83a4ba6cbfecffdc3b2"} Feb 18 00:43:34 crc kubenswrapper[4847]: I0218 00:43:34.069336 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-s6d6x" event={"ID":"57755abc-d7e9-479b-812c-6ddacee7d1be","Type":"ContainerStarted","Data":"6a8b53cb2b170a2e3636d10da70cecc3424762fae9edfe1aff936a2f41bfb54a"} Feb 18 00:43:34 crc kubenswrapper[4847]: I0218 00:43:34.071456 4847 generic.go:334] "Generic (PLEG): container finished" podID="df8c5b64-c6aa-4976-85af-8b96b92ac3bb" containerID="6bdeacc447c991cfa9d50be5c5366ac828521da0d516dd590549a6889efe6a9c" exitCode=0 Feb 18 00:43:34 crc kubenswrapper[4847]: I0218 00:43:34.071506 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qxzfp" event={"ID":"df8c5b64-c6aa-4976-85af-8b96b92ac3bb","Type":"ContainerDied","Data":"6bdeacc447c991cfa9d50be5c5366ac828521da0d516dd590549a6889efe6a9c"} Feb 18 00:43:34 crc kubenswrapper[4847]: I0218 00:43:34.071534 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qxzfp" Feb 18 00:43:34 crc kubenswrapper[4847]: I0218 00:43:34.071561 4847 scope.go:117] "RemoveContainer" containerID="6bdeacc447c991cfa9d50be5c5366ac828521da0d516dd590549a6889efe6a9c" Feb 18 00:43:34 crc kubenswrapper[4847]: I0218 00:43:34.071544 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qxzfp" event={"ID":"df8c5b64-c6aa-4976-85af-8b96b92ac3bb","Type":"ContainerDied","Data":"400c1d380407d5173eed26e106b19240e77d8fc8daa397569b312c7f19bf4616"} Feb 18 00:43:34 crc kubenswrapper[4847]: I0218 00:43:34.106805 4847 scope.go:117] "RemoveContainer" containerID="6bdeacc447c991cfa9d50be5c5366ac828521da0d516dd590549a6889efe6a9c" Feb 18 00:43:34 crc kubenswrapper[4847]: E0218 00:43:34.107425 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bdeacc447c991cfa9d50be5c5366ac828521da0d516dd590549a6889efe6a9c\": container with ID starting with 6bdeacc447c991cfa9d50be5c5366ac828521da0d516dd590549a6889efe6a9c not found: ID does not exist" containerID="6bdeacc447c991cfa9d50be5c5366ac828521da0d516dd590549a6889efe6a9c" Feb 18 00:43:34 crc kubenswrapper[4847]: I0218 00:43:34.107499 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bdeacc447c991cfa9d50be5c5366ac828521da0d516dd590549a6889efe6a9c"} err="failed to get container status \"6bdeacc447c991cfa9d50be5c5366ac828521da0d516dd590549a6889efe6a9c\": rpc error: code = NotFound desc = could not find container \"6bdeacc447c991cfa9d50be5c5366ac828521da0d516dd590549a6889efe6a9c\": container with ID starting with 6bdeacc447c991cfa9d50be5c5366ac828521da0d516dd590549a6889efe6a9c not found: ID does not exist" Feb 18 00:43:34 crc kubenswrapper[4847]: I0218 00:43:34.123583 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-operator-index-s6d6x" podStartSLOduration=2.064029688 podStartE2EDuration="2.123546701s" podCreationTimestamp="2026-02-18 00:43:32 +0000 UTC" firstStartedPulling="2026-02-18 00:43:33.174794408 +0000 UTC m=+1086.552145350" lastFinishedPulling="2026-02-18 00:43:33.234311401 +0000 UTC m=+1086.611662363" observedRunningTime="2026-02-18 00:43:34.091979783 +0000 UTC m=+1087.469330755" watchObservedRunningTime="2026-02-18 00:43:34.123546701 +0000 UTC m=+1087.500897683" Feb 18 00:43:34 crc kubenswrapper[4847]: I0218 00:43:34.131791 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-qxzfp"] Feb 18 00:43:34 crc kubenswrapper[4847]: I0218 00:43:34.143082 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-qxzfp"] Feb 18 00:43:35 crc kubenswrapper[4847]: I0218 00:43:35.429463 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df8c5b64-c6aa-4976-85af-8b96b92ac3bb" path="/var/lib/kubelet/pods/df8c5b64-c6aa-4976-85af-8b96b92ac3bb/volumes" Feb 18 00:43:36 crc kubenswrapper[4847]: I0218 00:43:36.385558 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-m56k2" Feb 18 00:43:42 crc kubenswrapper[4847]: I0218 00:43:42.786263 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-s6d6x" Feb 18 00:43:42 crc kubenswrapper[4847]: I0218 00:43:42.786697 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-s6d6x" Feb 18 00:43:42 crc kubenswrapper[4847]: I0218 00:43:42.866794 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-s6d6x" Feb 18 00:43:43 crc kubenswrapper[4847]: I0218 00:43:43.206838 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-index-s6d6x" Feb 18 00:43:53 crc kubenswrapper[4847]: I0218 00:43:53.492052 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:43:53 crc kubenswrapper[4847]: I0218 00:43:53.492754 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:43:58 crc kubenswrapper[4847]: I0218 00:43:58.722877 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5"] Feb 18 00:43:58 crc kubenswrapper[4847]: E0218 00:43:58.726157 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df8c5b64-c6aa-4976-85af-8b96b92ac3bb" containerName="registry-server" Feb 18 00:43:58 crc kubenswrapper[4847]: I0218 00:43:58.726214 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="df8c5b64-c6aa-4976-85af-8b96b92ac3bb" containerName="registry-server" Feb 18 00:43:58 crc kubenswrapper[4847]: I0218 00:43:58.726467 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="df8c5b64-c6aa-4976-85af-8b96b92ac3bb" containerName="registry-server" Feb 18 00:43:58 crc kubenswrapper[4847]: I0218 00:43:58.728331 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5" Feb 18 00:43:58 crc kubenswrapper[4847]: I0218 00:43:58.731863 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-726l6" Feb 18 00:43:58 crc kubenswrapper[4847]: I0218 00:43:58.746580 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5"] Feb 18 00:43:58 crc kubenswrapper[4847]: I0218 00:43:58.879578 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6453624a-f0d1-4831-a2ca-749f87f88542-util\") pod \"b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5\" (UID: \"6453624a-f0d1-4831-a2ca-749f87f88542\") " pod="openstack-operators/b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5" Feb 18 00:43:58 crc kubenswrapper[4847]: I0218 00:43:58.880578 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlq29\" (UniqueName: \"kubernetes.io/projected/6453624a-f0d1-4831-a2ca-749f87f88542-kube-api-access-mlq29\") pod \"b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5\" (UID: \"6453624a-f0d1-4831-a2ca-749f87f88542\") " pod="openstack-operators/b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5" Feb 18 00:43:58 crc kubenswrapper[4847]: I0218 00:43:58.880889 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6453624a-f0d1-4831-a2ca-749f87f88542-bundle\") pod \"b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5\" (UID: \"6453624a-f0d1-4831-a2ca-749f87f88542\") " pod="openstack-operators/b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5" Feb 18 00:43:58 crc kubenswrapper[4847]: I0218 
00:43:58.982917 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6453624a-f0d1-4831-a2ca-749f87f88542-util\") pod \"b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5\" (UID: \"6453624a-f0d1-4831-a2ca-749f87f88542\") " pod="openstack-operators/b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5" Feb 18 00:43:58 crc kubenswrapper[4847]: I0218 00:43:58.983088 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlq29\" (UniqueName: \"kubernetes.io/projected/6453624a-f0d1-4831-a2ca-749f87f88542-kube-api-access-mlq29\") pod \"b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5\" (UID: \"6453624a-f0d1-4831-a2ca-749f87f88542\") " pod="openstack-operators/b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5" Feb 18 00:43:58 crc kubenswrapper[4847]: I0218 00:43:58.983170 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6453624a-f0d1-4831-a2ca-749f87f88542-bundle\") pod \"b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5\" (UID: \"6453624a-f0d1-4831-a2ca-749f87f88542\") " pod="openstack-operators/b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5" Feb 18 00:43:58 crc kubenswrapper[4847]: I0218 00:43:58.983868 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6453624a-f0d1-4831-a2ca-749f87f88542-util\") pod \"b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5\" (UID: \"6453624a-f0d1-4831-a2ca-749f87f88542\") " pod="openstack-operators/b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5" Feb 18 00:43:58 crc kubenswrapper[4847]: I0218 00:43:58.984003 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/6453624a-f0d1-4831-a2ca-749f87f88542-bundle\") pod \"b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5\" (UID: \"6453624a-f0d1-4831-a2ca-749f87f88542\") " pod="openstack-operators/b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5" Feb 18 00:43:59 crc kubenswrapper[4847]: I0218 00:43:59.021525 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlq29\" (UniqueName: \"kubernetes.io/projected/6453624a-f0d1-4831-a2ca-749f87f88542-kube-api-access-mlq29\") pod \"b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5\" (UID: \"6453624a-f0d1-4831-a2ca-749f87f88542\") " pod="openstack-operators/b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5" Feb 18 00:43:59 crc kubenswrapper[4847]: I0218 00:43:59.056941 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5" Feb 18 00:43:59 crc kubenswrapper[4847]: I0218 00:43:59.604683 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5"] Feb 18 00:43:59 crc kubenswrapper[4847]: W0218 00:43:59.614392 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6453624a_f0d1_4831_a2ca_749f87f88542.slice/crio-af2e518b07f38a62145906df3ee4074ba9493214c2d00a63fb8ed7458f6bc362 WatchSource:0}: Error finding container af2e518b07f38a62145906df3ee4074ba9493214c2d00a63fb8ed7458f6bc362: Status 404 returned error can't find the container with id af2e518b07f38a62145906df3ee4074ba9493214c2d00a63fb8ed7458f6bc362 Feb 18 00:44:00 crc kubenswrapper[4847]: I0218 00:44:00.387278 4847 generic.go:334] "Generic (PLEG): container finished" podID="6453624a-f0d1-4831-a2ca-749f87f88542" containerID="441b0ecd34115572bce2b77fdf539e3877b19978a1ca47398c0f55c02b1491bb" exitCode=0 Feb 18 
00:44:00 crc kubenswrapper[4847]: I0218 00:44:00.387331 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5" event={"ID":"6453624a-f0d1-4831-a2ca-749f87f88542","Type":"ContainerDied","Data":"441b0ecd34115572bce2b77fdf539e3877b19978a1ca47398c0f55c02b1491bb"} Feb 18 00:44:00 crc kubenswrapper[4847]: I0218 00:44:00.387368 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5" event={"ID":"6453624a-f0d1-4831-a2ca-749f87f88542","Type":"ContainerStarted","Data":"af2e518b07f38a62145906df3ee4074ba9493214c2d00a63fb8ed7458f6bc362"} Feb 18 00:44:01 crc kubenswrapper[4847]: I0218 00:44:01.402511 4847 generic.go:334] "Generic (PLEG): container finished" podID="6453624a-f0d1-4831-a2ca-749f87f88542" containerID="b23053cd59d0cac945eddbd02e1112d6ee50033a819cb787a098201ac8a0c918" exitCode=0 Feb 18 00:44:01 crc kubenswrapper[4847]: I0218 00:44:01.402560 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5" event={"ID":"6453624a-f0d1-4831-a2ca-749f87f88542","Type":"ContainerDied","Data":"b23053cd59d0cac945eddbd02e1112d6ee50033a819cb787a098201ac8a0c918"} Feb 18 00:44:02 crc kubenswrapper[4847]: I0218 00:44:02.417118 4847 generic.go:334] "Generic (PLEG): container finished" podID="6453624a-f0d1-4831-a2ca-749f87f88542" containerID="6f2cb479d182d99638e8f524de4f7f2725f875e9ee2a77bd1ffe58ee39dd8309" exitCode=0 Feb 18 00:44:02 crc kubenswrapper[4847]: I0218 00:44:02.417519 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5" event={"ID":"6453624a-f0d1-4831-a2ca-749f87f88542","Type":"ContainerDied","Data":"6f2cb479d182d99638e8f524de4f7f2725f875e9ee2a77bd1ffe58ee39dd8309"} Feb 18 00:44:03 crc kubenswrapper[4847]: I0218 00:44:03.824056 
4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5" Feb 18 00:44:03 crc kubenswrapper[4847]: I0218 00:44:03.982968 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6453624a-f0d1-4831-a2ca-749f87f88542-bundle\") pod \"6453624a-f0d1-4831-a2ca-749f87f88542\" (UID: \"6453624a-f0d1-4831-a2ca-749f87f88542\") " Feb 18 00:44:03 crc kubenswrapper[4847]: I0218 00:44:03.983063 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlq29\" (UniqueName: \"kubernetes.io/projected/6453624a-f0d1-4831-a2ca-749f87f88542-kube-api-access-mlq29\") pod \"6453624a-f0d1-4831-a2ca-749f87f88542\" (UID: \"6453624a-f0d1-4831-a2ca-749f87f88542\") " Feb 18 00:44:03 crc kubenswrapper[4847]: I0218 00:44:03.983158 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6453624a-f0d1-4831-a2ca-749f87f88542-util\") pod \"6453624a-f0d1-4831-a2ca-749f87f88542\" (UID: \"6453624a-f0d1-4831-a2ca-749f87f88542\") " Feb 18 00:44:03 crc kubenswrapper[4847]: I0218 00:44:03.984772 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6453624a-f0d1-4831-a2ca-749f87f88542-bundle" (OuterVolumeSpecName: "bundle") pod "6453624a-f0d1-4831-a2ca-749f87f88542" (UID: "6453624a-f0d1-4831-a2ca-749f87f88542"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:44:03 crc kubenswrapper[4847]: I0218 00:44:03.991108 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6453624a-f0d1-4831-a2ca-749f87f88542-kube-api-access-mlq29" (OuterVolumeSpecName: "kube-api-access-mlq29") pod "6453624a-f0d1-4831-a2ca-749f87f88542" (UID: "6453624a-f0d1-4831-a2ca-749f87f88542"). 
InnerVolumeSpecName "kube-api-access-mlq29". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:44:04 crc kubenswrapper[4847]: I0218 00:44:04.015194 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6453624a-f0d1-4831-a2ca-749f87f88542-util" (OuterVolumeSpecName: "util") pod "6453624a-f0d1-4831-a2ca-749f87f88542" (UID: "6453624a-f0d1-4831-a2ca-749f87f88542"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:44:04 crc kubenswrapper[4847]: I0218 00:44:04.085505 4847 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6453624a-f0d1-4831-a2ca-749f87f88542-util\") on node \"crc\" DevicePath \"\"" Feb 18 00:44:04 crc kubenswrapper[4847]: I0218 00:44:04.085663 4847 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6453624a-f0d1-4831-a2ca-749f87f88542-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:44:04 crc kubenswrapper[4847]: I0218 00:44:04.085697 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlq29\" (UniqueName: \"kubernetes.io/projected/6453624a-f0d1-4831-a2ca-749f87f88542-kube-api-access-mlq29\") on node \"crc\" DevicePath \"\"" Feb 18 00:44:04 crc kubenswrapper[4847]: I0218 00:44:04.440297 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5" event={"ID":"6453624a-f0d1-4831-a2ca-749f87f88542","Type":"ContainerDied","Data":"af2e518b07f38a62145906df3ee4074ba9493214c2d00a63fb8ed7458f6bc362"} Feb 18 00:44:04 crc kubenswrapper[4847]: I0218 00:44:04.440352 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af2e518b07f38a62145906df3ee4074ba9493214c2d00a63fb8ed7458f6bc362" Feb 18 00:44:04 crc kubenswrapper[4847]: I0218 00:44:04.440371 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5" Feb 18 00:44:12 crc kubenswrapper[4847]: I0218 00:44:12.148981 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-65cd6ddc4f-mqptb"] Feb 18 00:44:12 crc kubenswrapper[4847]: E0218 00:44:12.149946 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6453624a-f0d1-4831-a2ca-749f87f88542" containerName="util" Feb 18 00:44:12 crc kubenswrapper[4847]: I0218 00:44:12.149969 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="6453624a-f0d1-4831-a2ca-749f87f88542" containerName="util" Feb 18 00:44:12 crc kubenswrapper[4847]: E0218 00:44:12.149991 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6453624a-f0d1-4831-a2ca-749f87f88542" containerName="extract" Feb 18 00:44:12 crc kubenswrapper[4847]: I0218 00:44:12.150006 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="6453624a-f0d1-4831-a2ca-749f87f88542" containerName="extract" Feb 18 00:44:12 crc kubenswrapper[4847]: E0218 00:44:12.150034 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6453624a-f0d1-4831-a2ca-749f87f88542" containerName="pull" Feb 18 00:44:12 crc kubenswrapper[4847]: I0218 00:44:12.150048 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="6453624a-f0d1-4831-a2ca-749f87f88542" containerName="pull" Feb 18 00:44:12 crc kubenswrapper[4847]: I0218 00:44:12.150290 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="6453624a-f0d1-4831-a2ca-749f87f88542" containerName="extract" Feb 18 00:44:12 crc kubenswrapper[4847]: I0218 00:44:12.151156 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-65cd6ddc4f-mqptb" Feb 18 00:44:12 crc kubenswrapper[4847]: I0218 00:44:12.153492 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-q78mr" Feb 18 00:44:12 crc kubenswrapper[4847]: I0218 00:44:12.192824 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-65cd6ddc4f-mqptb"] Feb 18 00:44:12 crc kubenswrapper[4847]: I0218 00:44:12.326213 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncb9j\" (UniqueName: \"kubernetes.io/projected/5c8009fe-0ea5-4bd5-a152-73dff9f00145-kube-api-access-ncb9j\") pod \"openstack-operator-controller-init-65cd6ddc4f-mqptb\" (UID: \"5c8009fe-0ea5-4bd5-a152-73dff9f00145\") " pod="openstack-operators/openstack-operator-controller-init-65cd6ddc4f-mqptb" Feb 18 00:44:12 crc kubenswrapper[4847]: I0218 00:44:12.427188 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncb9j\" (UniqueName: \"kubernetes.io/projected/5c8009fe-0ea5-4bd5-a152-73dff9f00145-kube-api-access-ncb9j\") pod \"openstack-operator-controller-init-65cd6ddc4f-mqptb\" (UID: \"5c8009fe-0ea5-4bd5-a152-73dff9f00145\") " pod="openstack-operators/openstack-operator-controller-init-65cd6ddc4f-mqptb" Feb 18 00:44:12 crc kubenswrapper[4847]: I0218 00:44:12.453469 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncb9j\" (UniqueName: \"kubernetes.io/projected/5c8009fe-0ea5-4bd5-a152-73dff9f00145-kube-api-access-ncb9j\") pod \"openstack-operator-controller-init-65cd6ddc4f-mqptb\" (UID: \"5c8009fe-0ea5-4bd5-a152-73dff9f00145\") " pod="openstack-operators/openstack-operator-controller-init-65cd6ddc4f-mqptb" Feb 18 00:44:12 crc kubenswrapper[4847]: I0218 00:44:12.474686 4847 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-65cd6ddc4f-mqptb" Feb 18 00:44:12 crc kubenswrapper[4847]: I0218 00:44:12.946341 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-65cd6ddc4f-mqptb"] Feb 18 00:44:13 crc kubenswrapper[4847]: I0218 00:44:13.543297 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-65cd6ddc4f-mqptb" event={"ID":"5c8009fe-0ea5-4bd5-a152-73dff9f00145","Type":"ContainerStarted","Data":"21e7285555cc23f8be3a0e3e0979cd0e72b6eeb559a9dbbaea96be5756975991"} Feb 18 00:44:17 crc kubenswrapper[4847]: I0218 00:44:17.579764 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-65cd6ddc4f-mqptb" event={"ID":"5c8009fe-0ea5-4bd5-a152-73dff9f00145","Type":"ContainerStarted","Data":"834c6567ddc2107a8ef67992c11e2c65d8cd5ebf73d560b9f75b3bc7cb32e094"} Feb 18 00:44:17 crc kubenswrapper[4847]: I0218 00:44:17.580527 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-65cd6ddc4f-mqptb" Feb 18 00:44:17 crc kubenswrapper[4847]: I0218 00:44:17.643671 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-65cd6ddc4f-mqptb" podStartSLOduration=1.959178541 podStartE2EDuration="5.643639891s" podCreationTimestamp="2026-02-18 00:44:12 +0000 UTC" firstStartedPulling="2026-02-18 00:44:12.948072984 +0000 UTC m=+1126.325423926" lastFinishedPulling="2026-02-18 00:44:16.632534334 +0000 UTC m=+1130.009885276" observedRunningTime="2026-02-18 00:44:17.627415679 +0000 UTC m=+1131.004766651" watchObservedRunningTime="2026-02-18 00:44:17.643639891 +0000 UTC m=+1131.020990873" Feb 18 00:44:22 crc kubenswrapper[4847]: I0218 00:44:22.478992 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/openstack-operator-controller-init-65cd6ddc4f-mqptb" Feb 18 00:44:23 crc kubenswrapper[4847]: I0218 00:44:23.492205 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:44:23 crc kubenswrapper[4847]: I0218 00:44:23.492590 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:44:23 crc kubenswrapper[4847]: I0218 00:44:23.492727 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 00:44:23 crc kubenswrapper[4847]: I0218 00:44:23.493373 4847 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0fd06824414c18aeb73533601d48a5d63e6df2929401b5f19f7490f5ebb56186"} pod="openshift-machine-config-operator/machine-config-daemon-xsj47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:44:23 crc kubenswrapper[4847]: I0218 00:44:23.493430 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" containerID="cri-o://0fd06824414c18aeb73533601d48a5d63e6df2929401b5f19f7490f5ebb56186" gracePeriod=600 Feb 18 00:44:23 crc kubenswrapper[4847]: I0218 00:44:23.647689 4847 generic.go:334] "Generic (PLEG): container finished" 
podID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerID="0fd06824414c18aeb73533601d48a5d63e6df2929401b5f19f7490f5ebb56186" exitCode=0 Feb 18 00:44:23 crc kubenswrapper[4847]: I0218 00:44:23.647751 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerDied","Data":"0fd06824414c18aeb73533601d48a5d63e6df2929401b5f19f7490f5ebb56186"} Feb 18 00:44:23 crc kubenswrapper[4847]: I0218 00:44:23.647797 4847 scope.go:117] "RemoveContainer" containerID="2ffcd87b881b6139f9535c89dd0258cbf56290dc9a8d88b06780fd38c9f1e0fa" Feb 18 00:44:24 crc kubenswrapper[4847]: I0218 00:44:24.659953 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerStarted","Data":"270eacc836d3834cb6726d9cae5de99162027296d57351176eedc46878735764"} Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.831657 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-k8v2d"] Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.833508 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-k8v2d" Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.842455 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-k8v2d"] Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.843082 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-lf4ng" Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.849505 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnxr7\" (UniqueName: \"kubernetes.io/projected/c0bb6956-fedb-40ce-9d87-3fa43b468103-kube-api-access-hnxr7\") pod \"barbican-operator-controller-manager-868647ff47-k8v2d\" (UID: \"c0bb6956-fedb-40ce-9d87-3fa43b468103\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-k8v2d" Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.852350 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-njjgx"] Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.853480 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-njjgx" Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.856970 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-4pcgw" Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.864950 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-bflmk"] Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.866094 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bflmk" Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.868768 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-xd29z" Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.872202 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-njjgx"] Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.891916 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-bflmk"] Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.900198 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-7vvgv"] Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.901126 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7vvgv" Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.903444 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-6rd5q" Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.917165 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-7vvgv"] Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.942717 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-xnrms"] Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.943628 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xnrms" Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.946848 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-b4m78" Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.951070 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7rdd\" (UniqueName: \"kubernetes.io/projected/6ccdc5e8-7582-4ea6-89f7-b30e7c96ba33-kube-api-access-b7rdd\") pod \"designate-operator-controller-manager-6d8bf5c495-bflmk\" (UID: \"6ccdc5e8-7582-4ea6-89f7-b30e7c96ba33\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bflmk" Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.951119 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kz8c\" (UniqueName: \"kubernetes.io/projected/0e56e3bc-f5fe-4d91-9cb4-bc22b59fd9eb-kube-api-access-7kz8c\") pod \"glance-operator-controller-manager-77987464f4-7vvgv\" (UID: \"0e56e3bc-f5fe-4d91-9cb4-bc22b59fd9eb\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-7vvgv" Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.951147 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmmvk\" (UniqueName: \"kubernetes.io/projected/a93522f3-c6ff-46fb-ab96-0af205914e2f-kube-api-access-wmmvk\") pod \"heat-operator-controller-manager-69f49c598c-xnrms\" (UID: \"a93522f3-c6ff-46fb-ab96-0af205914e2f\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xnrms" Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.951195 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnxr7\" (UniqueName: 
\"kubernetes.io/projected/c0bb6956-fedb-40ce-9d87-3fa43b468103-kube-api-access-hnxr7\") pod \"barbican-operator-controller-manager-868647ff47-k8v2d\" (UID: \"c0bb6956-fedb-40ce-9d87-3fa43b468103\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-k8v2d" Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.951231 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qxqs\" (UniqueName: \"kubernetes.io/projected/5080d582-df48-411d-ae00-57bb214b3fb1-kube-api-access-6qxqs\") pod \"cinder-operator-controller-manager-5d946d989d-njjgx\" (UID: \"5080d582-df48-411d-ae00-57bb214b3fb1\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-njjgx" Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.957201 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t5sg8"] Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.958020 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t5sg8" Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.976451 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-qs82n" Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.976582 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-xnrms"] Feb 18 00:44:51 crc kubenswrapper[4847]: I0218 00:44:51.988689 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnxr7\" (UniqueName: \"kubernetes.io/projected/c0bb6956-fedb-40ce-9d87-3fa43b468103-kube-api-access-hnxr7\") pod \"barbican-operator-controller-manager-868647ff47-k8v2d\" (UID: \"c0bb6956-fedb-40ce-9d87-3fa43b468103\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-k8v2d" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.000390 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-4g2zb"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.001368 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-4g2zb" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.007976 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.008274 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-jkctf" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.022441 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-4g2zb"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.038557 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t5sg8"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.042733 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-tzkvx"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.043722 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-tzkvx" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.048781 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-p798x" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.052591 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fvj9\" (UniqueName: \"kubernetes.io/projected/a7705c91-5ed6-4a64-b9a1-06af4d223613-kube-api-access-4fvj9\") pod \"horizon-operator-controller-manager-5b9b8895d5-t5sg8\" (UID: \"a7705c91-5ed6-4a64-b9a1-06af4d223613\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t5sg8" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.052651 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjlg8\" (UniqueName: \"kubernetes.io/projected/bfdfed12-2cd6-4adc-b953-83d17460c270-kube-api-access-qjlg8\") pod \"ironic-operator-controller-manager-554564d7fc-tzkvx\" (UID: \"bfdfed12-2cd6-4adc-b953-83d17460c270\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-tzkvx" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.052726 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qxqs\" (UniqueName: \"kubernetes.io/projected/5080d582-df48-411d-ae00-57bb214b3fb1-kube-api-access-6qxqs\") pod \"cinder-operator-controller-manager-5d946d989d-njjgx\" (UID: \"5080d582-df48-411d-ae00-57bb214b3fb1\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-njjgx" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.052809 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm4kh\" (UniqueName: 
\"kubernetes.io/projected/22395d35-6b40-4f53-b3ca-dced6abd4b13-kube-api-access-gm4kh\") pod \"infra-operator-controller-manager-79d975b745-4g2zb\" (UID: \"22395d35-6b40-4f53-b3ca-dced6abd4b13\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4g2zb" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.052832 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7rdd\" (UniqueName: \"kubernetes.io/projected/6ccdc5e8-7582-4ea6-89f7-b30e7c96ba33-kube-api-access-b7rdd\") pod \"designate-operator-controller-manager-6d8bf5c495-bflmk\" (UID: \"6ccdc5e8-7582-4ea6-89f7-b30e7c96ba33\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bflmk" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.052859 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kz8c\" (UniqueName: \"kubernetes.io/projected/0e56e3bc-f5fe-4d91-9cb4-bc22b59fd9eb-kube-api-access-7kz8c\") pod \"glance-operator-controller-manager-77987464f4-7vvgv\" (UID: \"0e56e3bc-f5fe-4d91-9cb4-bc22b59fd9eb\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-7vvgv" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.052875 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22395d35-6b40-4f53-b3ca-dced6abd4b13-cert\") pod \"infra-operator-controller-manager-79d975b745-4g2zb\" (UID: \"22395d35-6b40-4f53-b3ca-dced6abd4b13\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4g2zb" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.052900 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmmvk\" (UniqueName: \"kubernetes.io/projected/a93522f3-c6ff-46fb-ab96-0af205914e2f-kube-api-access-wmmvk\") pod \"heat-operator-controller-manager-69f49c598c-xnrms\" (UID: 
\"a93522f3-c6ff-46fb-ab96-0af205914e2f\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xnrms" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.055660 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-68zsz"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.056362 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-68zsz" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.059725 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-pnpfs" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.073373 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-tzkvx"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.078591 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kz8c\" (UniqueName: \"kubernetes.io/projected/0e56e3bc-f5fe-4d91-9cb4-bc22b59fd9eb-kube-api-access-7kz8c\") pod \"glance-operator-controller-manager-77987464f4-7vvgv\" (UID: \"0e56e3bc-f5fe-4d91-9cb4-bc22b59fd9eb\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-7vvgv" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.085920 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7rdd\" (UniqueName: \"kubernetes.io/projected/6ccdc5e8-7582-4ea6-89f7-b30e7c96ba33-kube-api-access-b7rdd\") pod \"designate-operator-controller-manager-6d8bf5c495-bflmk\" (UID: \"6ccdc5e8-7582-4ea6-89f7-b30e7c96ba33\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bflmk" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.087892 4847 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-9b7bk"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.088892 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9b7bk" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.090109 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qxqs\" (UniqueName: \"kubernetes.io/projected/5080d582-df48-411d-ae00-57bb214b3fb1-kube-api-access-6qxqs\") pod \"cinder-operator-controller-manager-5d946d989d-njjgx\" (UID: \"5080d582-df48-411d-ae00-57bb214b3fb1\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-njjgx" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.091484 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-svzfx" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.098749 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-68zsz"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.102122 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmmvk\" (UniqueName: \"kubernetes.io/projected/a93522f3-c6ff-46fb-ab96-0af205914e2f-kube-api-access-wmmvk\") pod \"heat-operator-controller-manager-69f49c598c-xnrms\" (UID: \"a93522f3-c6ff-46fb-ab96-0af205914e2f\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xnrms" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.108015 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-8gg4t"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.109086 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8gg4t" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.114585 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-rj2lt" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.118751 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-9b7bk"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.157374 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22395d35-6b40-4f53-b3ca-dced6abd4b13-cert\") pod \"infra-operator-controller-manager-79d975b745-4g2zb\" (UID: \"22395d35-6b40-4f53-b3ca-dced6abd4b13\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4g2zb" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.157441 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fvj9\" (UniqueName: \"kubernetes.io/projected/a7705c91-5ed6-4a64-b9a1-06af4d223613-kube-api-access-4fvj9\") pod \"horizon-operator-controller-manager-5b9b8895d5-t5sg8\" (UID: \"a7705c91-5ed6-4a64-b9a1-06af4d223613\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t5sg8" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.157483 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7c4x\" (UniqueName: \"kubernetes.io/projected/a36027cb-b3fc-45b7-bcef-75e9b7743594-kube-api-access-x7c4x\") pod \"mariadb-operator-controller-manager-6994f66f48-8gg4t\" (UID: \"a36027cb-b3fc-45b7-bcef-75e9b7743594\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8gg4t" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.157511 4847 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-qjlg8\" (UniqueName: \"kubernetes.io/projected/bfdfed12-2cd6-4adc-b953-83d17460c270-kube-api-access-qjlg8\") pod \"ironic-operator-controller-manager-554564d7fc-tzkvx\" (UID: \"bfdfed12-2cd6-4adc-b953-83d17460c270\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-tzkvx" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.157551 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6hzq\" (UniqueName: \"kubernetes.io/projected/eb9dda88-61d8-471e-8f59-1f6918e048d0-kube-api-access-r6hzq\") pod \"manila-operator-controller-manager-54f6768c69-9b7bk\" (UID: \"eb9dda88-61d8-471e-8f59-1f6918e048d0\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9b7bk" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.157615 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt675\" (UniqueName: \"kubernetes.io/projected/20119aa4-b1ef-4ac7-9b93-af64593b22b3-kube-api-access-wt675\") pod \"keystone-operator-controller-manager-b4d948c87-68zsz\" (UID: \"20119aa4-b1ef-4ac7-9b93-af64593b22b3\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-68zsz" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.157680 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm4kh\" (UniqueName: \"kubernetes.io/projected/22395d35-6b40-4f53-b3ca-dced6abd4b13-kube-api-access-gm4kh\") pod \"infra-operator-controller-manager-79d975b745-4g2zb\" (UID: \"22395d35-6b40-4f53-b3ca-dced6abd4b13\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4g2zb" Feb 18 00:44:52 crc kubenswrapper[4847]: E0218 00:44:52.157738 4847 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 00:44:52 crc 
kubenswrapper[4847]: E0218 00:44:52.157799 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22395d35-6b40-4f53-b3ca-dced6abd4b13-cert podName:22395d35-6b40-4f53-b3ca-dced6abd4b13 nodeName:}" failed. No retries permitted until 2026-02-18 00:44:52.657782596 +0000 UTC m=+1166.035133538 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/22395d35-6b40-4f53-b3ca-dced6abd4b13-cert") pod "infra-operator-controller-manager-79d975b745-4g2zb" (UID: "22395d35-6b40-4f53-b3ca-dced6abd4b13") : secret "infra-operator-webhook-server-cert" not found Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.164354 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-8gg4t"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.183754 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-njjgx" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.184836 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-k8v2d" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.189801 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjlg8\" (UniqueName: \"kubernetes.io/projected/bfdfed12-2cd6-4adc-b953-83d17460c270-kube-api-access-qjlg8\") pod \"ironic-operator-controller-manager-554564d7fc-tzkvx\" (UID: \"bfdfed12-2cd6-4adc-b953-83d17460c270\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-tzkvx" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.191879 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm4kh\" (UniqueName: \"kubernetes.io/projected/22395d35-6b40-4f53-b3ca-dced6abd4b13-kube-api-access-gm4kh\") pod \"infra-operator-controller-manager-79d975b745-4g2zb\" (UID: \"22395d35-6b40-4f53-b3ca-dced6abd4b13\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4g2zb" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.203563 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bflmk" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.217007 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fvj9\" (UniqueName: \"kubernetes.io/projected/a7705c91-5ed6-4a64-b9a1-06af4d223613-kube-api-access-4fvj9\") pod \"horizon-operator-controller-manager-5b9b8895d5-t5sg8\" (UID: \"a7705c91-5ed6-4a64-b9a1-06af4d223613\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t5sg8" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.228524 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7vvgv" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.228999 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-zft7w"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.229961 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-zft7w" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.234960 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-l9ppk" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.242380 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-l2sl6"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.243436 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l2sl6" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.246246 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-68nkq" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.252705 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-zft7w"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.261522 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-4x7fq"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.262955 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4x7fq" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.264484 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7c4x\" (UniqueName: \"kubernetes.io/projected/a36027cb-b3fc-45b7-bcef-75e9b7743594-kube-api-access-x7c4x\") pod \"mariadb-operator-controller-manager-6994f66f48-8gg4t\" (UID: \"a36027cb-b3fc-45b7-bcef-75e9b7743594\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8gg4t" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.264547 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v4cj\" (UniqueName: \"kubernetes.io/projected/82cf79bd-1bb2-4c3d-81e5-123ba2cfae5e-kube-api-access-9v4cj\") pod \"nova-operator-controller-manager-567668f5cf-l2sl6\" (UID: \"82cf79bd-1bb2-4c3d-81e5-123ba2cfae5e\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l2sl6" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.264571 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6hzq\" (UniqueName: \"kubernetes.io/projected/eb9dda88-61d8-471e-8f59-1f6918e048d0-kube-api-access-r6hzq\") pod \"manila-operator-controller-manager-54f6768c69-9b7bk\" (UID: \"eb9dda88-61d8-471e-8f59-1f6918e048d0\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9b7bk" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.264611 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bjkg\" (UniqueName: \"kubernetes.io/projected/67706a3a-2985-42f6-9820-21bf9abc77fc-kube-api-access-4bjkg\") pod \"neutron-operator-controller-manager-64ddbf8bb-zft7w\" (UID: \"67706a3a-2985-42f6-9820-21bf9abc77fc\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-zft7w" Feb 18 00:44:52 crc 
kubenswrapper[4847]: I0218 00:44:52.264639 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt675\" (UniqueName: \"kubernetes.io/projected/20119aa4-b1ef-4ac7-9b93-af64593b22b3-kube-api-access-wt675\") pod \"keystone-operator-controller-manager-b4d948c87-68zsz\" (UID: \"20119aa4-b1ef-4ac7-9b93-af64593b22b3\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-68zsz" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.265550 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-x6msk" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.266272 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xnrms" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.285034 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7c4x\" (UniqueName: \"kubernetes.io/projected/a36027cb-b3fc-45b7-bcef-75e9b7743594-kube-api-access-x7c4x\") pod \"mariadb-operator-controller-manager-6994f66f48-8gg4t\" (UID: \"a36027cb-b3fc-45b7-bcef-75e9b7743594\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8gg4t" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.286450 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6hzq\" (UniqueName: \"kubernetes.io/projected/eb9dda88-61d8-471e-8f59-1f6918e048d0-kube-api-access-r6hzq\") pod \"manila-operator-controller-manager-54f6768c69-9b7bk\" (UID: \"eb9dda88-61d8-471e-8f59-1f6918e048d0\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9b7bk" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.287156 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt675\" (UniqueName: 
\"kubernetes.io/projected/20119aa4-b1ef-4ac7-9b93-af64593b22b3-kube-api-access-wt675\") pod \"keystone-operator-controller-manager-b4d948c87-68zsz\" (UID: \"20119aa4-b1ef-4ac7-9b93-af64593b22b3\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-68zsz" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.292488 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t5sg8" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.310875 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-4x7fq"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.342146 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-l2sl6"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.363931 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.365027 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.366543 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bjkg\" (UniqueName: \"kubernetes.io/projected/67706a3a-2985-42f6-9820-21bf9abc77fc-kube-api-access-4bjkg\") pod \"neutron-operator-controller-manager-64ddbf8bb-zft7w\" (UID: \"67706a3a-2985-42f6-9820-21bf9abc77fc\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-zft7w" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.366661 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9chlc\" (UniqueName: \"kubernetes.io/projected/082203f6-e5fd-4dd3-8b94-2a46247155d9-kube-api-access-9chlc\") pod \"octavia-operator-controller-manager-69f8888797-4x7fq\" (UID: \"082203f6-e5fd-4dd3-8b94-2a46247155d9\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4x7fq" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.366759 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v4cj\" (UniqueName: \"kubernetes.io/projected/82cf79bd-1bb2-4c3d-81e5-123ba2cfae5e-kube-api-access-9v4cj\") pod \"nova-operator-controller-manager-567668f5cf-l2sl6\" (UID: \"82cf79bd-1bb2-4c3d-81e5-123ba2cfae5e\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l2sl6" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.367454 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-n7pm5" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.367589 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.376003 4847 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-cpzb6"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.377490 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cpzb6" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.379092 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-v5td5" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.382524 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.391020 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bjkg\" (UniqueName: \"kubernetes.io/projected/67706a3a-2985-42f6-9820-21bf9abc77fc-kube-api-access-4bjkg\") pod \"neutron-operator-controller-manager-64ddbf8bb-zft7w\" (UID: \"67706a3a-2985-42f6-9820-21bf9abc77fc\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-zft7w" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.395827 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-tzkvx" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.400852 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-cpzb6"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.409223 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v4cj\" (UniqueName: \"kubernetes.io/projected/82cf79bd-1bb2-4c3d-81e5-123ba2cfae5e-kube-api-access-9v4cj\") pod \"nova-operator-controller-manager-567668f5cf-l2sl6\" (UID: \"82cf79bd-1bb2-4c3d-81e5-123ba2cfae5e\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l2sl6" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.438208 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-q7phq"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.439690 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-q7phq" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.444903 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-pm84j" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.451796 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-q7phq"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.464365 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-68zsz" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.467719 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96061780-bc78-49b0-b23d-2118927130c4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl\" (UID: \"96061780-bc78-49b0-b23d-2118927130c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.467763 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25msn\" (UniqueName: \"kubernetes.io/projected/96061780-bc78-49b0-b23d-2118927130c4-kube-api-access-25msn\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl\" (UID: \"96061780-bc78-49b0-b23d-2118927130c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.467834 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlxnj\" (UniqueName: \"kubernetes.io/projected/eef8e54a-fdcd-4e1a-a56f-e2b8b4627c02-kube-api-access-dlxnj\") pod \"placement-operator-controller-manager-8497b45c89-q7phq\" (UID: \"eef8e54a-fdcd-4e1a-a56f-e2b8b4627c02\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-q7phq" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.467861 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9chlc\" (UniqueName: \"kubernetes.io/projected/082203f6-e5fd-4dd3-8b94-2a46247155d9-kube-api-access-9chlc\") pod \"octavia-operator-controller-manager-69f8888797-4x7fq\" (UID: \"082203f6-e5fd-4dd3-8b94-2a46247155d9\") " 
pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4x7fq" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.467903 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk278\" (UniqueName: \"kubernetes.io/projected/c63bde24-5850-4ef7-abba-00b22064d1c7-kube-api-access-sk278\") pod \"ovn-operator-controller-manager-d44cf6b75-cpzb6\" (UID: \"c63bde24-5850-4ef7-abba-00b22064d1c7\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cpzb6" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.500991 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9b7bk" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.509426 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-fj256"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.510477 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-fj256" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.513692 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9chlc\" (UniqueName: \"kubernetes.io/projected/082203f6-e5fd-4dd3-8b94-2a46247155d9-kube-api-access-9chlc\") pod \"octavia-operator-controller-manager-69f8888797-4x7fq\" (UID: \"082203f6-e5fd-4dd3-8b94-2a46247155d9\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4x7fq" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.517011 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-h5dqn" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.526383 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-77b97c6f8f-pcgng"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.527393 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-77b97c6f8f-pcgng" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.529831 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-ts427" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.551527 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-fj256"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.561645 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-77b97c6f8f-pcgng"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.562105 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8gg4t" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.573061 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlxnj\" (UniqueName: \"kubernetes.io/projected/eef8e54a-fdcd-4e1a-a56f-e2b8b4627c02-kube-api-access-dlxnj\") pod \"placement-operator-controller-manager-8497b45c89-q7phq\" (UID: \"eef8e54a-fdcd-4e1a-a56f-e2b8b4627c02\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-q7phq" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.573137 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk278\" (UniqueName: \"kubernetes.io/projected/c63bde24-5850-4ef7-abba-00b22064d1c7-kube-api-access-sk278\") pod \"ovn-operator-controller-manager-d44cf6b75-cpzb6\" (UID: \"c63bde24-5850-4ef7-abba-00b22064d1c7\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cpzb6" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.573233 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9zzv\" (UniqueName: \"kubernetes.io/projected/8fcd7de2-15f8-4d01-8535-296fb3d8de65-kube-api-access-q9zzv\") pod \"swift-operator-controller-manager-68f46476f-fj256\" (UID: \"8fcd7de2-15f8-4d01-8535-296fb3d8de65\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-fj256" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.573257 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lck4n\" (UniqueName: \"kubernetes.io/projected/2726117a-e40a-4a65-b290-404c27c71101-kube-api-access-lck4n\") pod \"telemetry-operator-controller-manager-77b97c6f8f-pcgng\" (UID: \"2726117a-e40a-4a65-b290-404c27c71101\") " pod="openstack-operators/telemetry-operator-controller-manager-77b97c6f8f-pcgng" Feb 18 00:44:52 crc 
kubenswrapper[4847]: I0218 00:44:52.573284 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96061780-bc78-49b0-b23d-2118927130c4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl\" (UID: \"96061780-bc78-49b0-b23d-2118927130c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" Feb 18 00:44:52 crc kubenswrapper[4847]: E0218 00:44:52.574017 4847 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 00:44:52 crc kubenswrapper[4847]: E0218 00:44:52.574079 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96061780-bc78-49b0-b23d-2118927130c4-cert podName:96061780-bc78-49b0-b23d-2118927130c4 nodeName:}" failed. No retries permitted until 2026-02-18 00:44:53.074060496 +0000 UTC m=+1166.451411438 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/96061780-bc78-49b0-b23d-2118927130c4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" (UID: "96061780-bc78-49b0-b23d-2118927130c4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.574274 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25msn\" (UniqueName: \"kubernetes.io/projected/96061780-bc78-49b0-b23d-2118927130c4-kube-api-access-25msn\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl\" (UID: \"96061780-bc78-49b0-b23d-2118927130c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.574665 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-zft7w" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.592387 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-867dw"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.593502 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-867dw" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.594300 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l2sl6" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.598171 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-867dw"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.616662 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-xttb8"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.618267 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xttb8" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.619957 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4x7fq" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.622105 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-xttb8"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.630881 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-r6sdm" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.631699 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-4znsc" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.642426 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk278\" (UniqueName: \"kubernetes.io/projected/c63bde24-5850-4ef7-abba-00b22064d1c7-kube-api-access-sk278\") pod \"ovn-operator-controller-manager-d44cf6b75-cpzb6\" (UID: \"c63bde24-5850-4ef7-abba-00b22064d1c7\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cpzb6" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.700844 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25msn\" (UniqueName: \"kubernetes.io/projected/96061780-bc78-49b0-b23d-2118927130c4-kube-api-access-25msn\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl\" (UID: \"96061780-bc78-49b0-b23d-2118927130c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.734459 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlxnj\" (UniqueName: \"kubernetes.io/projected/eef8e54a-fdcd-4e1a-a56f-e2b8b4627c02-kube-api-access-dlxnj\") pod \"placement-operator-controller-manager-8497b45c89-q7phq\" (UID: 
\"eef8e54a-fdcd-4e1a-a56f-e2b8b4627c02\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-q7phq" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.738651 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvxdr\" (UniqueName: \"kubernetes.io/projected/5aee4f12-aa12-4168-bc5b-ad6408c5e8d8-kube-api-access-qvxdr\") pod \"test-operator-controller-manager-7866795846-867dw\" (UID: \"5aee4f12-aa12-4168-bc5b-ad6408c5e8d8\") " pod="openstack-operators/test-operator-controller-manager-7866795846-867dw" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.738719 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22395d35-6b40-4f53-b3ca-dced6abd4b13-cert\") pod \"infra-operator-controller-manager-79d975b745-4g2zb\" (UID: \"22395d35-6b40-4f53-b3ca-dced6abd4b13\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4g2zb" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.742034 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cpzb6" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.738850 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9zzv\" (UniqueName: \"kubernetes.io/projected/8fcd7de2-15f8-4d01-8535-296fb3d8de65-kube-api-access-q9zzv\") pod \"swift-operator-controller-manager-68f46476f-fj256\" (UID: \"8fcd7de2-15f8-4d01-8535-296fb3d8de65\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-fj256" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.746517 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lck4n\" (UniqueName: \"kubernetes.io/projected/2726117a-e40a-4a65-b290-404c27c71101-kube-api-access-lck4n\") pod \"telemetry-operator-controller-manager-77b97c6f8f-pcgng\" (UID: \"2726117a-e40a-4a65-b290-404c27c71101\") " pod="openstack-operators/telemetry-operator-controller-manager-77b97c6f8f-pcgng" Feb 18 00:44:52 crc kubenswrapper[4847]: E0218 00:44:52.741294 4847 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 00:44:52 crc kubenswrapper[4847]: E0218 00:44:52.750708 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22395d35-6b40-4f53-b3ca-dced6abd4b13-cert podName:22395d35-6b40-4f53-b3ca-dced6abd4b13 nodeName:}" failed. No retries permitted until 2026-02-18 00:44:53.750681315 +0000 UTC m=+1167.128032257 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/22395d35-6b40-4f53-b3ca-dced6abd4b13-cert") pod "infra-operator-controller-manager-79d975b745-4g2zb" (UID: "22395d35-6b40-4f53-b3ca-dced6abd4b13") : secret "infra-operator-webhook-server-cert" not found Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.763841 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.766728 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.778395 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.778522 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-q7phq" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.779533 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-qf9hj" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.785823 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.810503 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lck4n\" (UniqueName: \"kubernetes.io/projected/2726117a-e40a-4a65-b290-404c27c71101-kube-api-access-lck4n\") pod \"telemetry-operator-controller-manager-77b97c6f8f-pcgng\" (UID: \"2726117a-e40a-4a65-b290-404c27c71101\") " pod="openstack-operators/telemetry-operator-controller-manager-77b97c6f8f-pcgng" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 
00:44:52.821020 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9zzv\" (UniqueName: \"kubernetes.io/projected/8fcd7de2-15f8-4d01-8535-296fb3d8de65-kube-api-access-q9zzv\") pod \"swift-operator-controller-manager-68f46476f-fj256\" (UID: \"8fcd7de2-15f8-4d01-8535-296fb3d8de65\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-fj256" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.835814 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.844334 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-fj256" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.853673 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvxdr\" (UniqueName: \"kubernetes.io/projected/5aee4f12-aa12-4168-bc5b-ad6408c5e8d8-kube-api-access-qvxdr\") pod \"test-operator-controller-manager-7866795846-867dw\" (UID: \"5aee4f12-aa12-4168-bc5b-ad6408c5e8d8\") " pod="openstack-operators/test-operator-controller-manager-7866795846-867dw" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.853717 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9mnz\" (UniqueName: \"kubernetes.io/projected/5cb3848f-23f4-4037-876f-e390daafc3ba-kube-api-access-t9mnz\") pod \"watcher-operator-controller-manager-5db88f68c-xttb8\" (UID: \"5cb3848f-23f4-4037-876f-e390daafc3ba\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xttb8" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.853768 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-webhook-certs\") pod \"openstack-operator-controller-manager-6994859df4-mcksc\" (UID: \"6bb1820a-9449-4f74-8523-ee747951291d\") " pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.853800 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-metrics-certs\") pod \"openstack-operator-controller-manager-6994859df4-mcksc\" (UID: \"6bb1820a-9449-4f74-8523-ee747951291d\") " pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.853915 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rz2n\" (UniqueName: \"kubernetes.io/projected/6bb1820a-9449-4f74-8523-ee747951291d-kube-api-access-6rz2n\") pod \"openstack-operator-controller-manager-6994859df4-mcksc\" (UID: \"6bb1820a-9449-4f74-8523-ee747951291d\") " pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.857265 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-77b97c6f8f-pcgng" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.878391 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvxdr\" (UniqueName: \"kubernetes.io/projected/5aee4f12-aa12-4168-bc5b-ad6408c5e8d8-kube-api-access-qvxdr\") pod \"test-operator-controller-manager-7866795846-867dw\" (UID: \"5aee4f12-aa12-4168-bc5b-ad6408c5e8d8\") " pod="openstack-operators/test-operator-controller-manager-7866795846-867dw" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.889713 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z9kpc"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.891050 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z9kpc" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.893044 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-frq54" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.909956 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-867dw" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.924330 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z9kpc"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.954114 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-njjgx"] Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.955058 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9mnz\" (UniqueName: \"kubernetes.io/projected/5cb3848f-23f4-4037-876f-e390daafc3ba-kube-api-access-t9mnz\") pod \"watcher-operator-controller-manager-5db88f68c-xttb8\" (UID: \"5cb3848f-23f4-4037-876f-e390daafc3ba\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xttb8" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.955105 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-webhook-certs\") pod \"openstack-operator-controller-manager-6994859df4-mcksc\" (UID: \"6bb1820a-9449-4f74-8523-ee747951291d\") " pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.955142 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-metrics-certs\") pod \"openstack-operator-controller-manager-6994859df4-mcksc\" (UID: \"6bb1820a-9449-4f74-8523-ee747951291d\") " pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.955241 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-tcg4h\" (UniqueName: \"kubernetes.io/projected/594a9f71-f227-40eb-89ab-a9f661a63e3a-kube-api-access-tcg4h\") pod \"rabbitmq-cluster-operator-manager-668c99d594-z9kpc\" (UID: \"594a9f71-f227-40eb-89ab-a9f661a63e3a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z9kpc" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.955270 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rz2n\" (UniqueName: \"kubernetes.io/projected/6bb1820a-9449-4f74-8523-ee747951291d-kube-api-access-6rz2n\") pod \"openstack-operator-controller-manager-6994859df4-mcksc\" (UID: \"6bb1820a-9449-4f74-8523-ee747951291d\") " pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:44:52 crc kubenswrapper[4847]: E0218 00:44:52.955981 4847 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 00:44:52 crc kubenswrapper[4847]: E0218 00:44:52.956056 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-webhook-certs podName:6bb1820a-9449-4f74-8523-ee747951291d nodeName:}" failed. No retries permitted until 2026-02-18 00:44:53.45603599 +0000 UTC m=+1166.833386932 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-webhook-certs") pod "openstack-operator-controller-manager-6994859df4-mcksc" (UID: "6bb1820a-9449-4f74-8523-ee747951291d") : secret "webhook-server-cert" not found Feb 18 00:44:52 crc kubenswrapper[4847]: E0218 00:44:52.960218 4847 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 00:44:52 crc kubenswrapper[4847]: E0218 00:44:52.960262 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-metrics-certs podName:6bb1820a-9449-4f74-8523-ee747951291d nodeName:}" failed. No retries permitted until 2026-02-18 00:44:53.460253259 +0000 UTC m=+1166.837604201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-metrics-certs") pod "openstack-operator-controller-manager-6994859df4-mcksc" (UID: "6bb1820a-9449-4f74-8523-ee747951291d") : secret "metrics-server-cert" not found Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.988237 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rz2n\" (UniqueName: \"kubernetes.io/projected/6bb1820a-9449-4f74-8523-ee747951291d-kube-api-access-6rz2n\") pod \"openstack-operator-controller-manager-6994859df4-mcksc\" (UID: \"6bb1820a-9449-4f74-8523-ee747951291d\") " pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:44:52 crc kubenswrapper[4847]: I0218 00:44:52.991507 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9mnz\" (UniqueName: \"kubernetes.io/projected/5cb3848f-23f4-4037-876f-e390daafc3ba-kube-api-access-t9mnz\") pod \"watcher-operator-controller-manager-5db88f68c-xttb8\" (UID: \"5cb3848f-23f4-4037-876f-e390daafc3ba\") " 
pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xttb8" Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.056948 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcg4h\" (UniqueName: \"kubernetes.io/projected/594a9f71-f227-40eb-89ab-a9f661a63e3a-kube-api-access-tcg4h\") pod \"rabbitmq-cluster-operator-manager-668c99d594-z9kpc\" (UID: \"594a9f71-f227-40eb-89ab-a9f661a63e3a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z9kpc" Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.087712 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcg4h\" (UniqueName: \"kubernetes.io/projected/594a9f71-f227-40eb-89ab-a9f661a63e3a-kube-api-access-tcg4h\") pod \"rabbitmq-cluster-operator-manager-668c99d594-z9kpc\" (UID: \"594a9f71-f227-40eb-89ab-a9f661a63e3a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z9kpc" Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.159665 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96061780-bc78-49b0-b23d-2118927130c4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl\" (UID: \"96061780-bc78-49b0-b23d-2118927130c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" Feb 18 00:44:53 crc kubenswrapper[4847]: E0218 00:44:53.159838 4847 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 00:44:53 crc kubenswrapper[4847]: E0218 00:44:53.159898 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96061780-bc78-49b0-b23d-2118927130c4-cert podName:96061780-bc78-49b0-b23d-2118927130c4 nodeName:}" failed. 
No retries permitted until 2026-02-18 00:44:54.159879869 +0000 UTC m=+1167.537230811 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/96061780-bc78-49b0-b23d-2118927130c4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" (UID: "96061780-bc78-49b0-b23d-2118927130c4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.186377 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xttb8" Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.205940 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-7vvgv"] Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.239046 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z9kpc" Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.272432 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-k8v2d"] Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.420464 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-bflmk"] Feb 18 00:44:53 crc kubenswrapper[4847]: W0218 00:44:53.423976 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ccdc5e8_7582_4ea6_89f7_b30e7c96ba33.slice/crio-ad0090458b5d3f82765021bd157f47bfe79953cff3c2197e501bb034434795b4 WatchSource:0}: Error finding container ad0090458b5d3f82765021bd157f47bfe79953cff3c2197e501bb034434795b4: Status 404 returned error can't find the container with id ad0090458b5d3f82765021bd157f47bfe79953cff3c2197e501bb034434795b4 Feb 
18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.464155 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-webhook-certs\") pod \"openstack-operator-controller-manager-6994859df4-mcksc\" (UID: \"6bb1820a-9449-4f74-8523-ee747951291d\") " pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.464441 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-metrics-certs\") pod \"openstack-operator-controller-manager-6994859df4-mcksc\" (UID: \"6bb1820a-9449-4f74-8523-ee747951291d\") " pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:44:53 crc kubenswrapper[4847]: E0218 00:44:53.464360 4847 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 00:44:53 crc kubenswrapper[4847]: E0218 00:44:53.464576 4847 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 00:44:53 crc kubenswrapper[4847]: E0218 00:44:53.464584 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-webhook-certs podName:6bb1820a-9449-4f74-8523-ee747951291d nodeName:}" failed. No retries permitted until 2026-02-18 00:44:54.464547913 +0000 UTC m=+1167.841898855 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-webhook-certs") pod "openstack-operator-controller-manager-6994859df4-mcksc" (UID: "6bb1820a-9449-4f74-8523-ee747951291d") : secret "webhook-server-cert" not found Feb 18 00:44:53 crc kubenswrapper[4847]: E0218 00:44:53.464654 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-metrics-certs podName:6bb1820a-9449-4f74-8523-ee747951291d nodeName:}" failed. No retries permitted until 2026-02-18 00:44:54.464639115 +0000 UTC m=+1167.841990057 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-metrics-certs") pod "openstack-operator-controller-manager-6994859df4-mcksc" (UID: "6bb1820a-9449-4f74-8523-ee747951291d") : secret "metrics-server-cert" not found Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.723799 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-9b7bk"] Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.736084 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-68zsz"] Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.755826 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t5sg8"] Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.771586 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22395d35-6b40-4f53-b3ca-dced6abd4b13-cert\") pod \"infra-operator-controller-manager-79d975b745-4g2zb\" (UID: \"22395d35-6b40-4f53-b3ca-dced6abd4b13\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4g2zb" Feb 18 00:44:53 crc 
kubenswrapper[4847]: E0218 00:44:53.771785 4847 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 00:44:53 crc kubenswrapper[4847]: E0218 00:44:53.771838 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22395d35-6b40-4f53-b3ca-dced6abd4b13-cert podName:22395d35-6b40-4f53-b3ca-dced6abd4b13 nodeName:}" failed. No retries permitted until 2026-02-18 00:44:55.771824898 +0000 UTC m=+1169.149175830 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/22395d35-6b40-4f53-b3ca-dced6abd4b13-cert") pod "infra-operator-controller-manager-79d975b745-4g2zb" (UID: "22395d35-6b40-4f53-b3ca-dced6abd4b13") : secret "infra-operator-webhook-server-cert" not found Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.774905 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-tzkvx"] Feb 18 00:44:53 crc kubenswrapper[4847]: W0218 00:44:53.782285 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7705c91_5ed6_4a64_b9a1_06af4d223613.slice/crio-ff3a6754bebfec052e231fa117b56a038d094e11fcd6c78d1935e931e773b828 WatchSource:0}: Error finding container ff3a6754bebfec052e231fa117b56a038d094e11fcd6c78d1935e931e773b828: Status 404 returned error can't find the container with id ff3a6754bebfec052e231fa117b56a038d094e11fcd6c78d1935e931e773b828 Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.788933 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-xnrms"] Feb 18 00:44:53 crc kubenswrapper[4847]: W0218 00:44:53.792729 4847 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfdfed12_2cd6_4adc_b953_83d17460c270.slice/crio-4220d3e19e852e1a4662d22fb343da76c585d9281328b97e42ae89d367a25f35 WatchSource:0}: Error finding container 4220d3e19e852e1a4662d22fb343da76c585d9281328b97e42ae89d367a25f35: Status 404 returned error can't find the container with id 4220d3e19e852e1a4662d22fb343da76c585d9281328b97e42ae89d367a25f35 Feb 18 00:44:53 crc kubenswrapper[4847]: W0218 00:44:53.796495 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67706a3a_2985_42f6_9820_21bf9abc77fc.slice/crio-8c363854b30815107220504904dc5263fc167dc9fe5262f2ea08bccf3c34450e WatchSource:0}: Error finding container 8c363854b30815107220504904dc5263fc167dc9fe5262f2ea08bccf3c34450e: Status 404 returned error can't find the container with id 8c363854b30815107220504904dc5263fc167dc9fe5262f2ea08bccf3c34450e Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.796545 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-zft7w"] Feb 18 00:44:53 crc kubenswrapper[4847]: W0218 00:44:53.802755 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda93522f3_c6ff_46fb_ab96_0af205914e2f.slice/crio-8e190eb6735d5c980e8db330f3d9175360d4bd4ffc74a33fcb9ad0d46d1d4a39 WatchSource:0}: Error finding container 8e190eb6735d5c980e8db330f3d9175360d4bd4ffc74a33fcb9ad0d46d1d4a39: Status 404 returned error can't find the container with id 8e190eb6735d5c980e8db330f3d9175360d4bd4ffc74a33fcb9ad0d46d1d4a39 Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.805635 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-8gg4t"] Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.923653 4847 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bflmk" event={"ID":"6ccdc5e8-7582-4ea6-89f7-b30e7c96ba33","Type":"ContainerStarted","Data":"ad0090458b5d3f82765021bd157f47bfe79953cff3c2197e501bb034434795b4"} Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.924901 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8gg4t" event={"ID":"a36027cb-b3fc-45b7-bcef-75e9b7743594","Type":"ContainerStarted","Data":"cc95b91ab2f00a8b734ff86d59a25b3aadc63eee4da045f43b8473dea69938b8"} Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.925717 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t5sg8" event={"ID":"a7705c91-5ed6-4a64-b9a1-06af4d223613","Type":"ContainerStarted","Data":"ff3a6754bebfec052e231fa117b56a038d094e11fcd6c78d1935e931e773b828"} Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.926582 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-zft7w" event={"ID":"67706a3a-2985-42f6-9820-21bf9abc77fc","Type":"ContainerStarted","Data":"8c363854b30815107220504904dc5263fc167dc9fe5262f2ea08bccf3c34450e"} Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.928754 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xnrms" event={"ID":"a93522f3-c6ff-46fb-ab96-0af205914e2f","Type":"ContainerStarted","Data":"8e190eb6735d5c980e8db330f3d9175360d4bd4ffc74a33fcb9ad0d46d1d4a39"} Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.929738 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-68zsz" event={"ID":"20119aa4-b1ef-4ac7-9b93-af64593b22b3","Type":"ContainerStarted","Data":"a29d0ba8bf5ee80cbcfdda9a3d53a97047412360208849a49585f7a057068dba"} Feb 18 00:44:53 crc 
kubenswrapper[4847]: I0218 00:44:53.930960 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9b7bk" event={"ID":"eb9dda88-61d8-471e-8f59-1f6918e048d0","Type":"ContainerStarted","Data":"de859b3933af9ab88bbbab57dde0d9e6f0f85617444344a2760a2c62d342eee8"} Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.932276 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7vvgv" event={"ID":"0e56e3bc-f5fe-4d91-9cb4-bc22b59fd9eb","Type":"ContainerStarted","Data":"b33aa51c743e166a60cf5c3b0cb12729553221c0aff28ea63362c05e1b98ca18"} Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.933287 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-njjgx" event={"ID":"5080d582-df48-411d-ae00-57bb214b3fb1","Type":"ContainerStarted","Data":"3b4242d023a8d8896f1051c8218a2cb185f00d2f622b543dca787aa7d80f4e2f"} Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.934629 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-k8v2d" event={"ID":"c0bb6956-fedb-40ce-9d87-3fa43b468103","Type":"ContainerStarted","Data":"6541249508a3a4a3770eef8f090077b05ff44683041a13e038598c5361b44720"} Feb 18 00:44:53 crc kubenswrapper[4847]: I0218 00:44:53.935920 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-tzkvx" event={"ID":"bfdfed12-2cd6-4adc-b953-83d17460c270","Type":"ContainerStarted","Data":"4220d3e19e852e1a4662d22fb343da76c585d9281328b97e42ae89d367a25f35"} Feb 18 00:44:54 crc kubenswrapper[4847]: I0218 00:44:54.120227 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-cpzb6"] Feb 18 00:44:54 crc kubenswrapper[4847]: I0218 00:44:54.133801 4847 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-q7phq"] Feb 18 00:44:54 crc kubenswrapper[4847]: I0218 00:44:54.147739 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-fj256"] Feb 18 00:44:54 crc kubenswrapper[4847]: I0218 00:44:54.155857 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-4x7fq"] Feb 18 00:44:54 crc kubenswrapper[4847]: W0218 00:44:54.161578 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82cf79bd_1bb2_4c3d_81e5_123ba2cfae5e.slice/crio-300d9ea512c9f05a7b60c65717705b7167633fadd91fb43d897acf16c113d785 WatchSource:0}: Error finding container 300d9ea512c9f05a7b60c65717705b7167633fadd91fb43d897acf16c113d785: Status 404 returned error can't find the container with id 300d9ea512c9f05a7b60c65717705b7167633fadd91fb43d897acf16c113d785 Feb 18 00:44:54 crc kubenswrapper[4847]: W0218 00:44:54.163418 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fcd7de2_15f8_4d01_8535_296fb3d8de65.slice/crio-d1ab1cb667c987eaed3c17f320f30d81c1197280f0b9bf8e96801f7da142f479 WatchSource:0}: Error finding container d1ab1cb667c987eaed3c17f320f30d81c1197280f0b9bf8e96801f7da142f479: Status 404 returned error can't find the container with id d1ab1cb667c987eaed3c17f320f30d81c1197280f0b9bf8e96801f7da142f479 Feb 18 00:44:54 crc kubenswrapper[4847]: W0218 00:44:54.164463 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cb3848f_23f4_4037_876f_e390daafc3ba.slice/crio-c7613ef387ea5b442f8471b2702773897b3d94e24b2a116e210b7bb263e966b8 WatchSource:0}: Error finding container 
c7613ef387ea5b442f8471b2702773897b3d94e24b2a116e210b7bb263e966b8: Status 404 returned error can't find the container with id c7613ef387ea5b442f8471b2702773897b3d94e24b2a116e210b7bb263e966b8 Feb 18 00:44:54 crc kubenswrapper[4847]: E0218 00:44:54.167021 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t9mnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-xttb8_openstack-operators(5cb3848f-23f4-4037-876f-e390daafc3ba): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 00:44:54 crc kubenswrapper[4847]: E0218 00:44:54.168862 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xttb8" podUID="5cb3848f-23f4-4037-876f-e390daafc3ba" Feb 18 00:44:54 crc kubenswrapper[4847]: E0218 00:44:54.171409 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9v4cj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-l2sl6_openstack-operators(82cf79bd-1bb2-4c3d-81e5-123ba2cfae5e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 00:44:54 crc kubenswrapper[4847]: E0218 00:44:54.172491 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l2sl6" podUID="82cf79bd-1bb2-4c3d-81e5-123ba2cfae5e" Feb 18 00:44:54 crc kubenswrapper[4847]: I0218 00:44:54.176723 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96061780-bc78-49b0-b23d-2118927130c4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl\" (UID: \"96061780-bc78-49b0-b23d-2118927130c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" Feb 18 00:44:54 crc kubenswrapper[4847]: E0218 00:44:54.176872 4847 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 
00:44:54 crc kubenswrapper[4847]: E0218 00:44:54.176926 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96061780-bc78-49b0-b23d-2118927130c4-cert podName:96061780-bc78-49b0-b23d-2118927130c4 nodeName:}" failed. No retries permitted until 2026-02-18 00:44:56.176908765 +0000 UTC m=+1169.554259707 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/96061780-bc78-49b0-b23d-2118927130c4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" (UID: "96061780-bc78-49b0-b23d-2118927130c4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 00:44:54 crc kubenswrapper[4847]: I0218 00:44:54.185063 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-77b97c6f8f-pcgng"] Feb 18 00:44:54 crc kubenswrapper[4847]: I0218 00:44:54.191014 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z9kpc"] Feb 18 00:44:54 crc kubenswrapper[4847]: E0218 00:44:54.192022 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tcg4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-z9kpc_openstack-operators(594a9f71-f227-40eb-89ab-a9f661a63e3a): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 00:44:54 crc kubenswrapper[4847]: E0218 00:44:54.192494 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qvxdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-867dw_openstack-operators(5aee4f12-aa12-4168-bc5b-ad6408c5e8d8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 00:44:54 crc kubenswrapper[4847]: E0218 00:44:54.196751 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-867dw" podUID="5aee4f12-aa12-4168-bc5b-ad6408c5e8d8" Feb 18 00:44:54 crc kubenswrapper[4847]: E0218 00:44:54.196799 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z9kpc" podUID="594a9f71-f227-40eb-89ab-a9f661a63e3a" Feb 18 00:44:54 crc kubenswrapper[4847]: E0218 00:44:54.196934 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.200:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lck4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-77b97c6f8f-pcgng_openstack-operators(2726117a-e40a-4a65-b290-404c27c71101): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 18 00:44:54 crc kubenswrapper[4847]: I0218 00:44:54.197378 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-867dw"] Feb 18 00:44:54 crc kubenswrapper[4847]: E0218 00:44:54.198423 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-77b97c6f8f-pcgng" podUID="2726117a-e40a-4a65-b290-404c27c71101" Feb 18 00:44:54 crc kubenswrapper[4847]: I0218 00:44:54.202556 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-l2sl6"] Feb 18 00:44:54 crc kubenswrapper[4847]: I0218 00:44:54.208915 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-xttb8"] Feb 18 00:44:54 crc kubenswrapper[4847]: I0218 00:44:54.487582 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-webhook-certs\") pod \"openstack-operator-controller-manager-6994859df4-mcksc\" (UID: \"6bb1820a-9449-4f74-8523-ee747951291d\") " pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:44:54 crc kubenswrapper[4847]: I0218 00:44:54.487654 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-metrics-certs\") pod \"openstack-operator-controller-manager-6994859df4-mcksc\" (UID: \"6bb1820a-9449-4f74-8523-ee747951291d\") " pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:44:54 crc kubenswrapper[4847]: E0218 00:44:54.487833 4847 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 00:44:54 crc kubenswrapper[4847]: E0218 00:44:54.487877 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-metrics-certs podName:6bb1820a-9449-4f74-8523-ee747951291d nodeName:}" failed. No retries permitted until 2026-02-18 00:44:56.487864837 +0000 UTC m=+1169.865215779 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-metrics-certs") pod "openstack-operator-controller-manager-6994859df4-mcksc" (UID: "6bb1820a-9449-4f74-8523-ee747951291d") : secret "metrics-server-cert" not found Feb 18 00:44:54 crc kubenswrapper[4847]: E0218 00:44:54.488192 4847 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 00:44:54 crc kubenswrapper[4847]: E0218 00:44:54.488220 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-webhook-certs podName:6bb1820a-9449-4f74-8523-ee747951291d nodeName:}" failed. No retries permitted until 2026-02-18 00:44:56.488212865 +0000 UTC m=+1169.865563797 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-webhook-certs") pod "openstack-operator-controller-manager-6994859df4-mcksc" (UID: "6bb1820a-9449-4f74-8523-ee747951291d") : secret "webhook-server-cert" not found Feb 18 00:44:54 crc kubenswrapper[4847]: I0218 00:44:54.966908 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-77b97c6f8f-pcgng" event={"ID":"2726117a-e40a-4a65-b290-404c27c71101","Type":"ContainerStarted","Data":"834b31783a07979b93f5094ebb08df85c6d9339751fe026e5932f873ceabea65"} Feb 18 00:44:54 crc kubenswrapper[4847]: E0218 00:44:54.969487 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-77b97c6f8f-pcgng" podUID="2726117a-e40a-4a65-b290-404c27c71101" Feb 18 00:44:54 crc kubenswrapper[4847]: 
I0218 00:44:54.970071 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-867dw" event={"ID":"5aee4f12-aa12-4168-bc5b-ad6408c5e8d8","Type":"ContainerStarted","Data":"7ed08a2647d10605bff43fd9b171bd9e036b94312c95c5484986f07d0bbc8fb8"} Feb 18 00:44:54 crc kubenswrapper[4847]: E0218 00:44:54.972149 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-867dw" podUID="5aee4f12-aa12-4168-bc5b-ad6408c5e8d8" Feb 18 00:44:54 crc kubenswrapper[4847]: I0218 00:44:54.972340 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-q7phq" event={"ID":"eef8e54a-fdcd-4e1a-a56f-e2b8b4627c02","Type":"ContainerStarted","Data":"437bdc26495cd4d515a274afad5470d14a27509264c951332e513024cb3ee26b"} Feb 18 00:44:54 crc kubenswrapper[4847]: I0218 00:44:54.975350 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z9kpc" event={"ID":"594a9f71-f227-40eb-89ab-a9f661a63e3a","Type":"ContainerStarted","Data":"e489e3be9061f4c5ebe5b71ebb03f9d776be5450a6d04a44f30a714ceff88f19"} Feb 18 00:44:54 crc kubenswrapper[4847]: E0218 00:44:54.977089 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z9kpc" podUID="594a9f71-f227-40eb-89ab-a9f661a63e3a" Feb 18 00:44:54 crc kubenswrapper[4847]: I0218 
00:44:54.978799 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xttb8" event={"ID":"5cb3848f-23f4-4037-876f-e390daafc3ba","Type":"ContainerStarted","Data":"c7613ef387ea5b442f8471b2702773897b3d94e24b2a116e210b7bb263e966b8"} Feb 18 00:44:54 crc kubenswrapper[4847]: E0218 00:44:54.980546 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xttb8" podUID="5cb3848f-23f4-4037-876f-e390daafc3ba" Feb 18 00:44:54 crc kubenswrapper[4847]: I0218 00:44:54.984824 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4x7fq" event={"ID":"082203f6-e5fd-4dd3-8b94-2a46247155d9","Type":"ContainerStarted","Data":"753e63f50729dab45f50a670daab406dfc93c1558d3d5248f375db3dd1bdfc62"} Feb 18 00:44:54 crc kubenswrapper[4847]: I0218 00:44:54.988563 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cpzb6" event={"ID":"c63bde24-5850-4ef7-abba-00b22064d1c7","Type":"ContainerStarted","Data":"eac86f3925ef1fd9a37e8063a130b8e23fdf1be49eb2f32e2758dc678c8721e7"} Feb 18 00:44:55 crc kubenswrapper[4847]: I0218 00:44:55.005098 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l2sl6" event={"ID":"82cf79bd-1bb2-4c3d-81e5-123ba2cfae5e","Type":"ContainerStarted","Data":"300d9ea512c9f05a7b60c65717705b7167633fadd91fb43d897acf16c113d785"} Feb 18 00:44:55 crc kubenswrapper[4847]: I0218 00:44:55.007321 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-fj256" event={"ID":"8fcd7de2-15f8-4d01-8535-296fb3d8de65","Type":"ContainerStarted","Data":"d1ab1cb667c987eaed3c17f320f30d81c1197280f0b9bf8e96801f7da142f479"} Feb 18 00:44:55 crc kubenswrapper[4847]: E0218 00:44:55.008824 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l2sl6" podUID="82cf79bd-1bb2-4c3d-81e5-123ba2cfae5e" Feb 18 00:44:55 crc kubenswrapper[4847]: I0218 00:44:55.810821 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22395d35-6b40-4f53-b3ca-dced6abd4b13-cert\") pod \"infra-operator-controller-manager-79d975b745-4g2zb\" (UID: \"22395d35-6b40-4f53-b3ca-dced6abd4b13\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4g2zb" Feb 18 00:44:55 crc kubenswrapper[4847]: E0218 00:44:55.811077 4847 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 00:44:55 crc kubenswrapper[4847]: E0218 00:44:55.811123 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22395d35-6b40-4f53-b3ca-dced6abd4b13-cert podName:22395d35-6b40-4f53-b3ca-dced6abd4b13 nodeName:}" failed. No retries permitted until 2026-02-18 00:44:59.811110292 +0000 UTC m=+1173.188461234 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/22395d35-6b40-4f53-b3ca-dced6abd4b13-cert") pod "infra-operator-controller-manager-79d975b745-4g2zb" (UID: "22395d35-6b40-4f53-b3ca-dced6abd4b13") : secret "infra-operator-webhook-server-cert" not found Feb 18 00:44:56 crc kubenswrapper[4847]: E0218 00:44:56.022163 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l2sl6" podUID="82cf79bd-1bb2-4c3d-81e5-123ba2cfae5e" Feb 18 00:44:56 crc kubenswrapper[4847]: E0218 00:44:56.022183 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-867dw" podUID="5aee4f12-aa12-4168-bc5b-ad6408c5e8d8" Feb 18 00:44:56 crc kubenswrapper[4847]: E0218 00:44:56.022251 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-77b97c6f8f-pcgng" podUID="2726117a-e40a-4a65-b290-404c27c71101" Feb 18 00:44:56 crc kubenswrapper[4847]: E0218 00:44:56.022690 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xttb8" podUID="5cb3848f-23f4-4037-876f-e390daafc3ba" Feb 18 00:44:56 crc kubenswrapper[4847]: E0218 00:44:56.022746 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z9kpc" podUID="594a9f71-f227-40eb-89ab-a9f661a63e3a" Feb 18 00:44:56 crc kubenswrapper[4847]: I0218 00:44:56.216694 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96061780-bc78-49b0-b23d-2118927130c4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl\" (UID: \"96061780-bc78-49b0-b23d-2118927130c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" Feb 18 00:44:56 crc kubenswrapper[4847]: E0218 00:44:56.217196 4847 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 00:44:56 crc kubenswrapper[4847]: E0218 00:44:56.217292 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96061780-bc78-49b0-b23d-2118927130c4-cert podName:96061780-bc78-49b0-b23d-2118927130c4 nodeName:}" failed. No retries permitted until 2026-02-18 00:45:00.217231494 +0000 UTC m=+1173.594582436 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/96061780-bc78-49b0-b23d-2118927130c4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" (UID: "96061780-bc78-49b0-b23d-2118927130c4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 00:44:56 crc kubenswrapper[4847]: I0218 00:44:56.521339 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-metrics-certs\") pod \"openstack-operator-controller-manager-6994859df4-mcksc\" (UID: \"6bb1820a-9449-4f74-8523-ee747951291d\") " pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:44:56 crc kubenswrapper[4847]: I0218 00:44:56.521482 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-webhook-certs\") pod \"openstack-operator-controller-manager-6994859df4-mcksc\" (UID: \"6bb1820a-9449-4f74-8523-ee747951291d\") " pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:44:56 crc kubenswrapper[4847]: E0218 00:44:56.521625 4847 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 00:44:56 crc kubenswrapper[4847]: E0218 00:44:56.521671 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-webhook-certs podName:6bb1820a-9449-4f74-8523-ee747951291d nodeName:}" failed. No retries permitted until 2026-02-18 00:45:00.521655732 +0000 UTC m=+1173.899006674 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-webhook-certs") pod "openstack-operator-controller-manager-6994859df4-mcksc" (UID: "6bb1820a-9449-4f74-8523-ee747951291d") : secret "webhook-server-cert" not found Feb 18 00:44:56 crc kubenswrapper[4847]: E0218 00:44:56.521956 4847 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 00:44:56 crc kubenswrapper[4847]: E0218 00:44:56.521988 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-metrics-certs podName:6bb1820a-9449-4f74-8523-ee747951291d nodeName:}" failed. No retries permitted until 2026-02-18 00:45:00.52198082 +0000 UTC m=+1173.899331762 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-metrics-certs") pod "openstack-operator-controller-manager-6994859df4-mcksc" (UID: "6bb1820a-9449-4f74-8523-ee747951291d") : secret "metrics-server-cert" not found Feb 18 00:44:59 crc kubenswrapper[4847]: I0218 00:44:59.879662 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22395d35-6b40-4f53-b3ca-dced6abd4b13-cert\") pod \"infra-operator-controller-manager-79d975b745-4g2zb\" (UID: \"22395d35-6b40-4f53-b3ca-dced6abd4b13\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4g2zb" Feb 18 00:44:59 crc kubenswrapper[4847]: E0218 00:44:59.879854 4847 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 00:44:59 crc kubenswrapper[4847]: E0218 00:44:59.880416 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22395d35-6b40-4f53-b3ca-dced6abd4b13-cert 
podName:22395d35-6b40-4f53-b3ca-dced6abd4b13 nodeName:}" failed. No retries permitted until 2026-02-18 00:45:07.880395023 +0000 UTC m=+1181.257745965 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/22395d35-6b40-4f53-b3ca-dced6abd4b13-cert") pod "infra-operator-controller-manager-79d975b745-4g2zb" (UID: "22395d35-6b40-4f53-b3ca-dced6abd4b13") : secret "infra-operator-webhook-server-cert" not found Feb 18 00:45:00 crc kubenswrapper[4847]: I0218 00:45:00.152354 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522925-48n2m"] Feb 18 00:45:00 crc kubenswrapper[4847]: I0218 00:45:00.153695 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-48n2m" Feb 18 00:45:00 crc kubenswrapper[4847]: I0218 00:45:00.156431 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 00:45:00 crc kubenswrapper[4847]: I0218 00:45:00.161902 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 00:45:00 crc kubenswrapper[4847]: I0218 00:45:00.171200 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522925-48n2m"] Feb 18 00:45:00 crc kubenswrapper[4847]: I0218 00:45:00.286842 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2011530e-7707-49e4-b5a7-f7867a3b57bb-config-volume\") pod \"collect-profiles-29522925-48n2m\" (UID: \"2011530e-7707-49e4-b5a7-f7867a3b57bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-48n2m" Feb 18 00:45:00 crc kubenswrapper[4847]: I0218 00:45:00.286897 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgjh7\" (UniqueName: \"kubernetes.io/projected/2011530e-7707-49e4-b5a7-f7867a3b57bb-kube-api-access-pgjh7\") pod \"collect-profiles-29522925-48n2m\" (UID: \"2011530e-7707-49e4-b5a7-f7867a3b57bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-48n2m" Feb 18 00:45:00 crc kubenswrapper[4847]: I0218 00:45:00.287114 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2011530e-7707-49e4-b5a7-f7867a3b57bb-secret-volume\") pod \"collect-profiles-29522925-48n2m\" (UID: \"2011530e-7707-49e4-b5a7-f7867a3b57bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-48n2m" Feb 18 00:45:00 crc kubenswrapper[4847]: I0218 00:45:00.287424 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96061780-bc78-49b0-b23d-2118927130c4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl\" (UID: \"96061780-bc78-49b0-b23d-2118927130c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" Feb 18 00:45:00 crc kubenswrapper[4847]: E0218 00:45:00.287539 4847 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 00:45:00 crc kubenswrapper[4847]: E0218 00:45:00.287634 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96061780-bc78-49b0-b23d-2118927130c4-cert podName:96061780-bc78-49b0-b23d-2118927130c4 nodeName:}" failed. No retries permitted until 2026-02-18 00:45:08.287594911 +0000 UTC m=+1181.664945863 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/96061780-bc78-49b0-b23d-2118927130c4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" (UID: "96061780-bc78-49b0-b23d-2118927130c4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 00:45:00 crc kubenswrapper[4847]: I0218 00:45:00.389109 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2011530e-7707-49e4-b5a7-f7867a3b57bb-secret-volume\") pod \"collect-profiles-29522925-48n2m\" (UID: \"2011530e-7707-49e4-b5a7-f7867a3b57bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-48n2m" Feb 18 00:45:00 crc kubenswrapper[4847]: I0218 00:45:00.389526 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2011530e-7707-49e4-b5a7-f7867a3b57bb-config-volume\") pod \"collect-profiles-29522925-48n2m\" (UID: \"2011530e-7707-49e4-b5a7-f7867a3b57bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-48n2m" Feb 18 00:45:00 crc kubenswrapper[4847]: I0218 00:45:00.389651 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgjh7\" (UniqueName: \"kubernetes.io/projected/2011530e-7707-49e4-b5a7-f7867a3b57bb-kube-api-access-pgjh7\") pod \"collect-profiles-29522925-48n2m\" (UID: \"2011530e-7707-49e4-b5a7-f7867a3b57bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-48n2m" Feb 18 00:45:00 crc kubenswrapper[4847]: I0218 00:45:00.390756 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2011530e-7707-49e4-b5a7-f7867a3b57bb-config-volume\") pod \"collect-profiles-29522925-48n2m\" (UID: \"2011530e-7707-49e4-b5a7-f7867a3b57bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-48n2m" 
Feb 18 00:45:00 crc kubenswrapper[4847]: I0218 00:45:00.408699 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2011530e-7707-49e4-b5a7-f7867a3b57bb-secret-volume\") pod \"collect-profiles-29522925-48n2m\" (UID: \"2011530e-7707-49e4-b5a7-f7867a3b57bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-48n2m" Feb 18 00:45:00 crc kubenswrapper[4847]: I0218 00:45:00.422555 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgjh7\" (UniqueName: \"kubernetes.io/projected/2011530e-7707-49e4-b5a7-f7867a3b57bb-kube-api-access-pgjh7\") pod \"collect-profiles-29522925-48n2m\" (UID: \"2011530e-7707-49e4-b5a7-f7867a3b57bb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-48n2m" Feb 18 00:45:00 crc kubenswrapper[4847]: I0218 00:45:00.488115 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-48n2m" Feb 18 00:45:00 crc kubenswrapper[4847]: I0218 00:45:00.593520 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-metrics-certs\") pod \"openstack-operator-controller-manager-6994859df4-mcksc\" (UID: \"6bb1820a-9449-4f74-8523-ee747951291d\") " pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:45:00 crc kubenswrapper[4847]: E0218 00:45:00.593624 4847 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 00:45:00 crc kubenswrapper[4847]: E0218 00:45:00.593682 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-metrics-certs podName:6bb1820a-9449-4f74-8523-ee747951291d nodeName:}" failed. 
No retries permitted until 2026-02-18 00:45:08.593668417 +0000 UTC m=+1181.971019359 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-metrics-certs") pod "openstack-operator-controller-manager-6994859df4-mcksc" (UID: "6bb1820a-9449-4f74-8523-ee747951291d") : secret "metrics-server-cert" not found Feb 18 00:45:00 crc kubenswrapper[4847]: I0218 00:45:00.593853 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-webhook-certs\") pod \"openstack-operator-controller-manager-6994859df4-mcksc\" (UID: \"6bb1820a-9449-4f74-8523-ee747951291d\") " pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:45:00 crc kubenswrapper[4847]: E0218 00:45:00.594067 4847 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 00:45:00 crc kubenswrapper[4847]: E0218 00:45:00.594180 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-webhook-certs podName:6bb1820a-9449-4f74-8523-ee747951291d nodeName:}" failed. No retries permitted until 2026-02-18 00:45:08.594154249 +0000 UTC m=+1181.971505201 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-webhook-certs") pod "openstack-operator-controller-manager-6994859df4-mcksc" (UID: "6bb1820a-9449-4f74-8523-ee747951291d") : secret "webhook-server-cert" not found Feb 18 00:45:05 crc kubenswrapper[4847]: E0218 00:45:05.762195 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" Feb 18 00:45:05 crc kubenswrapper[4847]: E0218 00:45:05.762893 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sk278,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-cpzb6_openstack-operators(c63bde24-5850-4ef7-abba-00b22064d1c7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:45:05 crc kubenswrapper[4847]: E0218 00:45:05.764440 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cpzb6" podUID="c63bde24-5850-4ef7-abba-00b22064d1c7" Feb 18 00:45:06 crc kubenswrapper[4847]: E0218 00:45:06.102551 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cpzb6" podUID="c63bde24-5850-4ef7-abba-00b22064d1c7" Feb 18 00:45:07 crc kubenswrapper[4847]: I0218 00:45:07.914915 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22395d35-6b40-4f53-b3ca-dced6abd4b13-cert\") pod \"infra-operator-controller-manager-79d975b745-4g2zb\" (UID: \"22395d35-6b40-4f53-b3ca-dced6abd4b13\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4g2zb" Feb 18 00:45:07 crc kubenswrapper[4847]: E0218 00:45:07.915114 4847 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 18 00:45:07 crc kubenswrapper[4847]: E0218 00:45:07.915452 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22395d35-6b40-4f53-b3ca-dced6abd4b13-cert podName:22395d35-6b40-4f53-b3ca-dced6abd4b13 nodeName:}" failed. No retries permitted until 2026-02-18 00:45:23.915424416 +0000 UTC m=+1197.292775598 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/22395d35-6b40-4f53-b3ca-dced6abd4b13-cert") pod "infra-operator-controller-manager-79d975b745-4g2zb" (UID: "22395d35-6b40-4f53-b3ca-dced6abd4b13") : secret "infra-operator-webhook-server-cert" not found Feb 18 00:45:08 crc kubenswrapper[4847]: I0218 00:45:08.324401 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96061780-bc78-49b0-b23d-2118927130c4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl\" (UID: \"96061780-bc78-49b0-b23d-2118927130c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" Feb 18 00:45:08 crc kubenswrapper[4847]: E0218 00:45:08.324766 4847 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 00:45:08 crc kubenswrapper[4847]: E0218 00:45:08.325161 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96061780-bc78-49b0-b23d-2118927130c4-cert podName:96061780-bc78-49b0-b23d-2118927130c4 nodeName:}" failed. No retries permitted until 2026-02-18 00:45:24.325120812 +0000 UTC m=+1197.702471784 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/96061780-bc78-49b0-b23d-2118927130c4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" (UID: "96061780-bc78-49b0-b23d-2118927130c4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 18 00:45:08 crc kubenswrapper[4847]: I0218 00:45:08.630216 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-webhook-certs\") pod \"openstack-operator-controller-manager-6994859df4-mcksc\" (UID: \"6bb1820a-9449-4f74-8523-ee747951291d\") " pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:45:08 crc kubenswrapper[4847]: I0218 00:45:08.630502 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-metrics-certs\") pod \"openstack-operator-controller-manager-6994859df4-mcksc\" (UID: \"6bb1820a-9449-4f74-8523-ee747951291d\") " pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:45:08 crc kubenswrapper[4847]: E0218 00:45:08.630369 4847 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 18 00:45:08 crc kubenswrapper[4847]: E0218 00:45:08.630632 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-webhook-certs podName:6bb1820a-9449-4f74-8523-ee747951291d nodeName:}" failed. No retries permitted until 2026-02-18 00:45:24.630593585 +0000 UTC m=+1198.007944527 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-webhook-certs") pod "openstack-operator-controller-manager-6994859df4-mcksc" (UID: "6bb1820a-9449-4f74-8523-ee747951291d") : secret "webhook-server-cert" not found Feb 18 00:45:08 crc kubenswrapper[4847]: E0218 00:45:08.630677 4847 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 18 00:45:08 crc kubenswrapper[4847]: E0218 00:45:08.630728 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-metrics-certs podName:6bb1820a-9449-4f74-8523-ee747951291d nodeName:}" failed. No retries permitted until 2026-02-18 00:45:24.630714537 +0000 UTC m=+1198.008065479 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-metrics-certs") pod "openstack-operator-controller-manager-6994859df4-mcksc" (UID: "6bb1820a-9449-4f74-8523-ee747951291d") : secret "metrics-server-cert" not found Feb 18 00:45:09 crc kubenswrapper[4847]: E0218 00:45:09.163926 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 18 00:45:09 crc kubenswrapper[4847]: E0218 00:45:09.164132 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wt675,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-68zsz_openstack-operators(20119aa4-b1ef-4ac7-9b93-af64593b22b3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:45:09 crc kubenswrapper[4847]: E0218 00:45:09.165326 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-68zsz" podUID="20119aa4-b1ef-4ac7-9b93-af64593b22b3" Feb 18 00:45:10 crc kubenswrapper[4847]: I0218 00:45:10.139092 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-k8v2d" event={"ID":"c0bb6956-fedb-40ce-9d87-3fa43b468103","Type":"ContainerStarted","Data":"134255bc3e0bbfce9bdf536e2b7a6df1d1bf57494eb54659146db9d0eef8df4e"} Feb 18 00:45:10 crc kubenswrapper[4847]: I0218 00:45:10.139853 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-k8v2d" Feb 18 00:45:10 crc 
kubenswrapper[4847]: I0218 00:45:10.142746 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7vvgv" event={"ID":"0e56e3bc-f5fe-4d91-9cb4-bc22b59fd9eb","Type":"ContainerStarted","Data":"d01182a16bef96ca41f0c870fae77edeb4f258795df6ab857c1d1fd554337be4"} Feb 18 00:45:10 crc kubenswrapper[4847]: I0218 00:45:10.143395 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7vvgv" Feb 18 00:45:10 crc kubenswrapper[4847]: I0218 00:45:10.150403 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-njjgx" event={"ID":"5080d582-df48-411d-ae00-57bb214b3fb1","Type":"ContainerStarted","Data":"46f9f4a1010913d9ba623bbf7596eed3ebf80dc07b0747ebc724d84410ea44e4"} Feb 18 00:45:10 crc kubenswrapper[4847]: I0218 00:45:10.150529 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-njjgx" Feb 18 00:45:10 crc kubenswrapper[4847]: E0218 00:45:10.154766 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-68zsz" podUID="20119aa4-b1ef-4ac7-9b93-af64593b22b3" Feb 18 00:45:10 crc kubenswrapper[4847]: I0218 00:45:10.161754 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522925-48n2m"] Feb 18 00:45:10 crc kubenswrapper[4847]: I0218 00:45:10.161794 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-k8v2d" 
podStartSLOduration=2.782665042 podStartE2EDuration="19.161785016s" podCreationTimestamp="2026-02-18 00:44:51 +0000 UTC" firstStartedPulling="2026-02-18 00:44:53.329064553 +0000 UTC m=+1166.706415495" lastFinishedPulling="2026-02-18 00:45:09.708184527 +0000 UTC m=+1183.085535469" observedRunningTime="2026-02-18 00:45:10.154832222 +0000 UTC m=+1183.532183164" watchObservedRunningTime="2026-02-18 00:45:10.161785016 +0000 UTC m=+1183.539135948" Feb 18 00:45:10 crc kubenswrapper[4847]: I0218 00:45:10.209259 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-njjgx" podStartSLOduration=4.666350503 podStartE2EDuration="19.209238803s" podCreationTimestamp="2026-02-18 00:44:51 +0000 UTC" firstStartedPulling="2026-02-18 00:44:53.037517538 +0000 UTC m=+1166.414868480" lastFinishedPulling="2026-02-18 00:45:07.580405808 +0000 UTC m=+1180.957756780" observedRunningTime="2026-02-18 00:45:10.172903977 +0000 UTC m=+1183.550254919" watchObservedRunningTime="2026-02-18 00:45:10.209238803 +0000 UTC m=+1183.586589745" Feb 18 00:45:10 crc kubenswrapper[4847]: I0218 00:45:10.234921 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7vvgv" podStartSLOduration=2.855990268 podStartE2EDuration="19.234900437s" podCreationTimestamp="2026-02-18 00:44:51 +0000 UTC" firstStartedPulling="2026-02-18 00:44:53.329328159 +0000 UTC m=+1166.706679101" lastFinishedPulling="2026-02-18 00:45:09.708238328 +0000 UTC m=+1183.085589270" observedRunningTime="2026-02-18 00:45:10.229853928 +0000 UTC m=+1183.607204870" watchObservedRunningTime="2026-02-18 00:45:10.234900437 +0000 UTC m=+1183.612251379" Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.186994 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xnrms" 
event={"ID":"a93522f3-c6ff-46fb-ab96-0af205914e2f","Type":"ContainerStarted","Data":"a917789deadfe400fe3e368a7a6b95d58370f7813da92a72052e0cff3f836571"} Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.188126 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xnrms" Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.189679 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-tzkvx" event={"ID":"bfdfed12-2cd6-4adc-b953-83d17460c270","Type":"ContainerStarted","Data":"6ecac68f5a662cbf2d0026ae5bf31cbeee91a8716eb25cc92b1864b972c00710"} Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.195241 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-tzkvx" Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.210802 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bflmk" event={"ID":"6ccdc5e8-7582-4ea6-89f7-b30e7c96ba33","Type":"ContainerStarted","Data":"75305e0b43448f2c3fbce48ecdb2264f7629a0f188e47c28043dc143e64e47bd"} Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.211594 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bflmk" Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.225967 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-fj256" event={"ID":"8fcd7de2-15f8-4d01-8535-296fb3d8de65","Type":"ContainerStarted","Data":"39f5f18a07f0c268909bbacdca01a22f443724f35a40dbfc36c458682d41ee11"} Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.226273 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-fj256" Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.242394 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-q7phq" event={"ID":"eef8e54a-fdcd-4e1a-a56f-e2b8b4627c02","Type":"ContainerStarted","Data":"d2e3d97a6376c88525ac6afa416e8efdeeb25d222dc17e57bd32d6fb3e69ebab"} Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.243266 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-q7phq" Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.252054 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4x7fq" event={"ID":"082203f6-e5fd-4dd3-8b94-2a46247155d9","Type":"ContainerStarted","Data":"fe7d2c65532319219ce578af8540df0791e2ba292289e09ca28886c505c98d49"} Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.252765 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4x7fq" Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.257166 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xnrms" podStartSLOduration=5.491060091 podStartE2EDuration="20.257145196s" podCreationTimestamp="2026-02-18 00:44:51 +0000 UTC" firstStartedPulling="2026-02-18 00:44:53.807499417 +0000 UTC m=+1167.184850349" lastFinishedPulling="2026-02-18 00:45:08.573584512 +0000 UTC m=+1181.950935454" observedRunningTime="2026-02-18 00:45:11.250035629 +0000 UTC m=+1184.627386571" watchObservedRunningTime="2026-02-18 00:45:11.257145196 +0000 UTC m=+1184.634496138" Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.259467 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-48n2m" event={"ID":"2011530e-7707-49e4-b5a7-f7867a3b57bb","Type":"ContainerStarted","Data":"935d10759f617c3c16be97d67c5f8be33850d2b8a7ef948ed5ad66e297006405"} Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.259518 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-48n2m" event={"ID":"2011530e-7707-49e4-b5a7-f7867a3b57bb","Type":"ContainerStarted","Data":"c967c68ff01103117204856f013af2207e22fb84c61441ae323535aaf04fc412"} Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.279762 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9b7bk" event={"ID":"eb9dda88-61d8-471e-8f59-1f6918e048d0","Type":"ContainerStarted","Data":"e0ff56f46464d36c317e9c01ffb910e6ab5940ffcbaa6bff20e20f3f020aebff"} Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.279797 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9b7bk" Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.295931 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8gg4t" event={"ID":"a36027cb-b3fc-45b7-bcef-75e9b7743594","Type":"ContainerStarted","Data":"9625deda3b06a0d70b07a58cf724dd6890472d02b3a3e01f9964d56f45366515"} Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.297045 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8gg4t" Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.309279 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-fj256" podStartSLOduration=3.759581218 podStartE2EDuration="19.309260243s" 
podCreationTimestamp="2026-02-18 00:44:52 +0000 UTC" firstStartedPulling="2026-02-18 00:44:54.165978798 +0000 UTC m=+1167.543329740" lastFinishedPulling="2026-02-18 00:45:09.715657823 +0000 UTC m=+1183.093008765" observedRunningTime="2026-02-18 00:45:11.292934339 +0000 UTC m=+1184.670285281" watchObservedRunningTime="2026-02-18 00:45:11.309260243 +0000 UTC m=+1184.686611185" Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.314873 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-867dw" event={"ID":"5aee4f12-aa12-4168-bc5b-ad6408c5e8d8","Type":"ContainerStarted","Data":"a6753f93abbbf43573f961df96e3705aa4f74ad1d3935e6f28d25bd40ae72330"} Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.315669 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-867dw" Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.339669 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t5sg8" event={"ID":"a7705c91-5ed6-4a64-b9a1-06af4d223613","Type":"ContainerStarted","Data":"971fb5547392c672d308cb89e482602c37c4bd36f8cb3b354f753883f516815d"} Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.340754 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t5sg8" Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.371177 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-zft7w" event={"ID":"67706a3a-2985-42f6-9820-21bf9abc77fc","Type":"ContainerStarted","Data":"0c85c81e646db8df3cce0d4b9d93e0578a337f7db4e56c77ff089f30fb8fc838"} Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.390472 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bflmk" podStartSLOduration=4.097110681 podStartE2EDuration="20.390456665s" podCreationTimestamp="2026-02-18 00:44:51 +0000 UTC" firstStartedPulling="2026-02-18 00:44:53.427568902 +0000 UTC m=+1166.804919844" lastFinishedPulling="2026-02-18 00:45:09.720914896 +0000 UTC m=+1183.098265828" observedRunningTime="2026-02-18 00:45:11.330405671 +0000 UTC m=+1184.707756613" watchObservedRunningTime="2026-02-18 00:45:11.390456665 +0000 UTC m=+1184.767807607" Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.424341 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-tzkvx" podStartSLOduration=4.500261762 podStartE2EDuration="20.424326632s" podCreationTimestamp="2026-02-18 00:44:51 +0000 UTC" firstStartedPulling="2026-02-18 00:44:53.79529402 +0000 UTC m=+1167.172644962" lastFinishedPulling="2026-02-18 00:45:09.71935889 +0000 UTC m=+1183.096709832" observedRunningTime="2026-02-18 00:45:11.420810569 +0000 UTC m=+1184.798161511" watchObservedRunningTime="2026-02-18 00:45:11.424326632 +0000 UTC m=+1184.801677574" Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.426837 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-q7phq" podStartSLOduration=4.431857597 podStartE2EDuration="19.426831291s" podCreationTimestamp="2026-02-18 00:44:52 +0000 UTC" firstStartedPulling="2026-02-18 00:44:54.145904045 +0000 UTC m=+1167.523254977" lastFinishedPulling="2026-02-18 00:45:09.140877729 +0000 UTC m=+1182.518228671" observedRunningTime="2026-02-18 00:45:11.391923899 +0000 UTC m=+1184.769274841" watchObservedRunningTime="2026-02-18 00:45:11.426831291 +0000 UTC m=+1184.804182233" Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.495929 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-zft7w" podStartSLOduration=5.714659641 podStartE2EDuration="19.495912208s" podCreationTimestamp="2026-02-18 00:44:52 +0000 UTC" firstStartedPulling="2026-02-18 00:44:53.799184772 +0000 UTC m=+1167.176535714" lastFinishedPulling="2026-02-18 00:45:07.580437339 +0000 UTC m=+1180.957788281" observedRunningTime="2026-02-18 00:45:11.456875469 +0000 UTC m=+1184.834226411" watchObservedRunningTime="2026-02-18 00:45:11.495912208 +0000 UTC m=+1184.873263150" Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.496022 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-48n2m" podStartSLOduration=11.49601725 podStartE2EDuration="11.49601725s" podCreationTimestamp="2026-02-18 00:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:45:11.49048859 +0000 UTC m=+1184.867839532" watchObservedRunningTime="2026-02-18 00:45:11.49601725 +0000 UTC m=+1184.873368192" Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.524806 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8gg4t" podStartSLOduration=4.626961306 podStartE2EDuration="20.524788768s" podCreationTimestamp="2026-02-18 00:44:51 +0000 UTC" firstStartedPulling="2026-02-18 00:44:53.809341661 +0000 UTC m=+1167.186692603" lastFinishedPulling="2026-02-18 00:45:09.707169123 +0000 UTC m=+1183.084520065" observedRunningTime="2026-02-18 00:45:11.520056366 +0000 UTC m=+1184.897407318" watchObservedRunningTime="2026-02-18 00:45:11.524788768 +0000 UTC m=+1184.902139710" Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.554014 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9b7bk" 
podStartSLOduration=4.575596757 podStartE2EDuration="20.553982335s" podCreationTimestamp="2026-02-18 00:44:51 +0000 UTC" firstStartedPulling="2026-02-18 00:44:53.738542904 +0000 UTC m=+1167.115893846" lastFinishedPulling="2026-02-18 00:45:09.716928482 +0000 UTC m=+1183.094279424" observedRunningTime="2026-02-18 00:45:11.551809904 +0000 UTC m=+1184.929160846" watchObservedRunningTime="2026-02-18 00:45:11.553982335 +0000 UTC m=+1184.931333277" Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.571142 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4x7fq" podStartSLOduration=4.009984884 podStartE2EDuration="19.571126379s" podCreationTimestamp="2026-02-18 00:44:52 +0000 UTC" firstStartedPulling="2026-02-18 00:44:54.154518718 +0000 UTC m=+1167.531869660" lastFinishedPulling="2026-02-18 00:45:09.715660213 +0000 UTC m=+1183.093011155" observedRunningTime="2026-02-18 00:45:11.570461193 +0000 UTC m=+1184.947812135" watchObservedRunningTime="2026-02-18 00:45:11.571126379 +0000 UTC m=+1184.948477321" Feb 18 00:45:11 crc kubenswrapper[4847]: I0218 00:45:11.607367 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t5sg8" podStartSLOduration=4.685049764 podStartE2EDuration="20.607352972s" podCreationTimestamp="2026-02-18 00:44:51 +0000 UTC" firstStartedPulling="2026-02-18 00:44:53.786514033 +0000 UTC m=+1167.163864975" lastFinishedPulling="2026-02-18 00:45:09.708817241 +0000 UTC m=+1183.086168183" observedRunningTime="2026-02-18 00:45:11.599701892 +0000 UTC m=+1184.977052824" watchObservedRunningTime="2026-02-18 00:45:11.607352972 +0000 UTC m=+1184.984703914" Feb 18 00:45:12 crc kubenswrapper[4847]: I0218 00:45:12.394678 4847 generic.go:334] "Generic (PLEG): container finished" podID="2011530e-7707-49e4-b5a7-f7867a3b57bb" 
containerID="935d10759f617c3c16be97d67c5f8be33850d2b8a7ef948ed5ad66e297006405" exitCode=0 Feb 18 00:45:12 crc kubenswrapper[4847]: I0218 00:45:12.395138 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-48n2m" event={"ID":"2011530e-7707-49e4-b5a7-f7867a3b57bb","Type":"ContainerDied","Data":"935d10759f617c3c16be97d67c5f8be33850d2b8a7ef948ed5ad66e297006405"} Feb 18 00:45:12 crc kubenswrapper[4847]: I0218 00:45:12.395914 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-zft7w" Feb 18 00:45:12 crc kubenswrapper[4847]: I0218 00:45:12.418932 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-867dw" podStartSLOduration=4.684842674 podStartE2EDuration="20.41890404s" podCreationTimestamp="2026-02-18 00:44:52 +0000 UTC" firstStartedPulling="2026-02-18 00:44:54.192320608 +0000 UTC m=+1167.569671550" lastFinishedPulling="2026-02-18 00:45:09.926381974 +0000 UTC m=+1183.303732916" observedRunningTime="2026-02-18 00:45:11.617517301 +0000 UTC m=+1184.994868243" watchObservedRunningTime="2026-02-18 00:45:12.41890404 +0000 UTC m=+1185.796254982" Feb 18 00:45:20 crc kubenswrapper[4847]: I0218 00:45:20.961788 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-48n2m" Feb 18 00:45:21 crc kubenswrapper[4847]: I0218 00:45:21.100263 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgjh7\" (UniqueName: \"kubernetes.io/projected/2011530e-7707-49e4-b5a7-f7867a3b57bb-kube-api-access-pgjh7\") pod \"2011530e-7707-49e4-b5a7-f7867a3b57bb\" (UID: \"2011530e-7707-49e4-b5a7-f7867a3b57bb\") " Feb 18 00:45:21 crc kubenswrapper[4847]: I0218 00:45:21.100338 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2011530e-7707-49e4-b5a7-f7867a3b57bb-secret-volume\") pod \"2011530e-7707-49e4-b5a7-f7867a3b57bb\" (UID: \"2011530e-7707-49e4-b5a7-f7867a3b57bb\") " Feb 18 00:45:21 crc kubenswrapper[4847]: I0218 00:45:21.100395 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2011530e-7707-49e4-b5a7-f7867a3b57bb-config-volume\") pod \"2011530e-7707-49e4-b5a7-f7867a3b57bb\" (UID: \"2011530e-7707-49e4-b5a7-f7867a3b57bb\") " Feb 18 00:45:21 crc kubenswrapper[4847]: I0218 00:45:21.101184 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2011530e-7707-49e4-b5a7-f7867a3b57bb-config-volume" (OuterVolumeSpecName: "config-volume") pod "2011530e-7707-49e4-b5a7-f7867a3b57bb" (UID: "2011530e-7707-49e4-b5a7-f7867a3b57bb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:45:21 crc kubenswrapper[4847]: I0218 00:45:21.106621 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2011530e-7707-49e4-b5a7-f7867a3b57bb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2011530e-7707-49e4-b5a7-f7867a3b57bb" (UID: "2011530e-7707-49e4-b5a7-f7867a3b57bb"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:45:21 crc kubenswrapper[4847]: I0218 00:45:21.107805 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2011530e-7707-49e4-b5a7-f7867a3b57bb-kube-api-access-pgjh7" (OuterVolumeSpecName: "kube-api-access-pgjh7") pod "2011530e-7707-49e4-b5a7-f7867a3b57bb" (UID: "2011530e-7707-49e4-b5a7-f7867a3b57bb"). InnerVolumeSpecName "kube-api-access-pgjh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:45:21 crc kubenswrapper[4847]: I0218 00:45:21.202749 4847 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2011530e-7707-49e4-b5a7-f7867a3b57bb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 00:45:21 crc kubenswrapper[4847]: I0218 00:45:21.202782 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgjh7\" (UniqueName: \"kubernetes.io/projected/2011530e-7707-49e4-b5a7-f7867a3b57bb-kube-api-access-pgjh7\") on node \"crc\" DevicePath \"\"" Feb 18 00:45:21 crc kubenswrapper[4847]: I0218 00:45:21.202794 4847 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2011530e-7707-49e4-b5a7-f7867a3b57bb-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 00:45:21 crc kubenswrapper[4847]: I0218 00:45:21.479640 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-48n2m" event={"ID":"2011530e-7707-49e4-b5a7-f7867a3b57bb","Type":"ContainerDied","Data":"c967c68ff01103117204856f013af2207e22fb84c61441ae323535aaf04fc412"} Feb 18 00:45:21 crc kubenswrapper[4847]: I0218 00:45:21.479901 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c967c68ff01103117204856f013af2207e22fb84c61441ae323535aaf04fc412" Feb 18 00:45:21 crc kubenswrapper[4847]: I0218 00:45:21.479720 4847 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522925-48n2m" Feb 18 00:45:22 crc kubenswrapper[4847]: E0218 00:45:22.173357 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 18 00:45:22 crc kubenswrapper[4847]: E0218 00:45:22.173682 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tcg4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-z9kpc_openstack-operators(594a9f71-f227-40eb-89ab-a9f661a63e3a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:45:22 crc kubenswrapper[4847]: E0218 00:45:22.175739 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z9kpc" podUID="594a9f71-f227-40eb-89ab-a9f661a63e3a" Feb 18 00:45:22 crc kubenswrapper[4847]: I0218 00:45:22.187145 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-njjgx" Feb 18 00:45:22 crc kubenswrapper[4847]: I0218 00:45:22.188480 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-k8v2d" Feb 18 00:45:22 crc kubenswrapper[4847]: I0218 00:45:22.211855 4847 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-bflmk" Feb 18 00:45:22 crc kubenswrapper[4847]: I0218 00:45:22.239759 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7vvgv" Feb 18 00:45:22 crc kubenswrapper[4847]: I0218 00:45:22.270081 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xnrms" Feb 18 00:45:22 crc kubenswrapper[4847]: I0218 00:45:22.303990 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-t5sg8" Feb 18 00:45:22 crc kubenswrapper[4847]: I0218 00:45:22.405387 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-tzkvx" Feb 18 00:45:22 crc kubenswrapper[4847]: I0218 00:45:22.516964 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-9b7bk" Feb 18 00:45:22 crc kubenswrapper[4847]: I0218 00:45:22.577537 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-8gg4t" Feb 18 00:45:22 crc kubenswrapper[4847]: I0218 00:45:22.579155 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-zft7w" Feb 18 00:45:22 crc kubenswrapper[4847]: I0218 00:45:22.659008 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-4x7fq" Feb 18 00:45:22 crc kubenswrapper[4847]: I0218 00:45:22.781196 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-q7phq" Feb 18 00:45:22 crc kubenswrapper[4847]: I0218 00:45:22.848299 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-fj256" Feb 18 00:45:22 crc kubenswrapper[4847]: I0218 00:45:22.927216 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-867dw" Feb 18 00:45:23 crc kubenswrapper[4847]: I0218 00:45:23.517669 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xttb8" event={"ID":"5cb3848f-23f4-4037-876f-e390daafc3ba","Type":"ContainerStarted","Data":"cdcf1c3b37d0c6f3616941c688013d206e2f37cea8a4cd1122009396f3bc5f6c"} Feb 18 00:45:23 crc kubenswrapper[4847]: I0218 00:45:23.518829 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xttb8" Feb 18 00:45:23 crc kubenswrapper[4847]: I0218 00:45:23.521561 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cpzb6" event={"ID":"c63bde24-5850-4ef7-abba-00b22064d1c7","Type":"ContainerStarted","Data":"033142f3008a7e8bd02bacbbe13c941f3b8a7dc4a3ba8e76bb7b9a61b3380f59"} Feb 18 00:45:23 crc kubenswrapper[4847]: I0218 00:45:23.522398 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cpzb6" Feb 18 00:45:23 crc kubenswrapper[4847]: I0218 00:45:23.524088 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-77b97c6f8f-pcgng" event={"ID":"2726117a-e40a-4a65-b290-404c27c71101","Type":"ContainerStarted","Data":"444e19a7bea47e42b8a231ef811fd9c63155bacda95e686b44c48170021570bc"} Feb 18 00:45:23 
crc kubenswrapper[4847]: I0218 00:45:23.524586 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-77b97c6f8f-pcgng" Feb 18 00:45:23 crc kubenswrapper[4847]: I0218 00:45:23.527972 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l2sl6" event={"ID":"82cf79bd-1bb2-4c3d-81e5-123ba2cfae5e","Type":"ContainerStarted","Data":"2093e1e30f4fce09e033d9ba75c17f77eafa092f1e5c9cd8cf54dfc07303018f"} Feb 18 00:45:23 crc kubenswrapper[4847]: I0218 00:45:23.528155 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l2sl6" Feb 18 00:45:23 crc kubenswrapper[4847]: I0218 00:45:23.535854 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xttb8" podStartSLOduration=3.539374745 podStartE2EDuration="31.535839947s" podCreationTimestamp="2026-02-18 00:44:52 +0000 UTC" firstStartedPulling="2026-02-18 00:44:54.166857359 +0000 UTC m=+1167.544208301" lastFinishedPulling="2026-02-18 00:45:22.163322561 +0000 UTC m=+1195.540673503" observedRunningTime="2026-02-18 00:45:23.532867127 +0000 UTC m=+1196.910218069" watchObservedRunningTime="2026-02-18 00:45:23.535839947 +0000 UTC m=+1196.913190889" Feb 18 00:45:23 crc kubenswrapper[4847]: I0218 00:45:23.551338 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-77b97c6f8f-pcgng" podStartSLOduration=3.527822143 podStartE2EDuration="31.551314451s" podCreationTimestamp="2026-02-18 00:44:52 +0000 UTC" firstStartedPulling="2026-02-18 00:44:54.196854105 +0000 UTC m=+1167.574205047" lastFinishedPulling="2026-02-18 00:45:22.220346413 +0000 UTC m=+1195.597697355" observedRunningTime="2026-02-18 00:45:23.547414049 +0000 UTC m=+1196.924764991" 
watchObservedRunningTime="2026-02-18 00:45:23.551314451 +0000 UTC m=+1196.928665403" Feb 18 00:45:23 crc kubenswrapper[4847]: I0218 00:45:23.581424 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cpzb6" podStartSLOduration=3.5072853200000003 podStartE2EDuration="31.58140101s" podCreationTimestamp="2026-02-18 00:44:52 +0000 UTC" firstStartedPulling="2026-02-18 00:44:54.145831944 +0000 UTC m=+1167.523182886" lastFinishedPulling="2026-02-18 00:45:22.219947604 +0000 UTC m=+1195.597298576" observedRunningTime="2026-02-18 00:45:23.5635602 +0000 UTC m=+1196.940911142" watchObservedRunningTime="2026-02-18 00:45:23.58140101 +0000 UTC m=+1196.958751972" Feb 18 00:45:23 crc kubenswrapper[4847]: I0218 00:45:23.581900 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l2sl6" podStartSLOduration=3.537004119 podStartE2EDuration="31.581894481s" podCreationTimestamp="2026-02-18 00:44:52 +0000 UTC" firstStartedPulling="2026-02-18 00:44:54.171294153 +0000 UTC m=+1167.548645095" lastFinishedPulling="2026-02-18 00:45:22.216184485 +0000 UTC m=+1195.593535457" observedRunningTime="2026-02-18 00:45:23.579588937 +0000 UTC m=+1196.956939889" watchObservedRunningTime="2026-02-18 00:45:23.581894481 +0000 UTC m=+1196.959245443" Feb 18 00:45:23 crc kubenswrapper[4847]: I0218 00:45:23.961964 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22395d35-6b40-4f53-b3ca-dced6abd4b13-cert\") pod \"infra-operator-controller-manager-79d975b745-4g2zb\" (UID: \"22395d35-6b40-4f53-b3ca-dced6abd4b13\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4g2zb" Feb 18 00:45:23 crc kubenswrapper[4847]: I0218 00:45:23.972474 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/22395d35-6b40-4f53-b3ca-dced6abd4b13-cert\") pod \"infra-operator-controller-manager-79d975b745-4g2zb\" (UID: \"22395d35-6b40-4f53-b3ca-dced6abd4b13\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-4g2zb" Feb 18 00:45:24 crc kubenswrapper[4847]: I0218 00:45:24.143005 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-4g2zb" Feb 18 00:45:24 crc kubenswrapper[4847]: I0218 00:45:24.379636 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96061780-bc78-49b0-b23d-2118927130c4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl\" (UID: \"96061780-bc78-49b0-b23d-2118927130c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" Feb 18 00:45:24 crc kubenswrapper[4847]: I0218 00:45:24.384096 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96061780-bc78-49b0-b23d-2118927130c4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl\" (UID: \"96061780-bc78-49b0-b23d-2118927130c4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" Feb 18 00:45:24 crc kubenswrapper[4847]: I0218 00:45:24.502248 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" Feb 18 00:45:24 crc kubenswrapper[4847]: I0218 00:45:24.683705 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-webhook-certs\") pod \"openstack-operator-controller-manager-6994859df4-mcksc\" (UID: \"6bb1820a-9449-4f74-8523-ee747951291d\") " pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:45:24 crc kubenswrapper[4847]: I0218 00:45:24.684841 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-metrics-certs\") pod \"openstack-operator-controller-manager-6994859df4-mcksc\" (UID: \"6bb1820a-9449-4f74-8523-ee747951291d\") " pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:45:24 crc kubenswrapper[4847]: I0218 00:45:24.689325 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-webhook-certs\") pod \"openstack-operator-controller-manager-6994859df4-mcksc\" (UID: \"6bb1820a-9449-4f74-8523-ee747951291d\") " pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:45:24 crc kubenswrapper[4847]: I0218 00:45:24.689575 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6bb1820a-9449-4f74-8523-ee747951291d-metrics-certs\") pod \"openstack-operator-controller-manager-6994859df4-mcksc\" (UID: \"6bb1820a-9449-4f74-8523-ee747951291d\") " pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:45:24 crc kubenswrapper[4847]: I0218 00:45:24.713227 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:45:24 crc kubenswrapper[4847]: I0218 00:45:24.720628 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-4g2zb"] Feb 18 00:45:24 crc kubenswrapper[4847]: I0218 00:45:24.947865 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl"] Feb 18 00:45:24 crc kubenswrapper[4847]: W0218 00:45:24.958561 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96061780_bc78_49b0_b23d_2118927130c4.slice/crio-49a72b4b1c4e0cbe69dee30ce64812a214458260738389c7b697656c3e91b2be WatchSource:0}: Error finding container 49a72b4b1c4e0cbe69dee30ce64812a214458260738389c7b697656c3e91b2be: Status 404 returned error can't find the container with id 49a72b4b1c4e0cbe69dee30ce64812a214458260738389c7b697656c3e91b2be Feb 18 00:45:25 crc kubenswrapper[4847]: I0218 00:45:25.018986 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc"] Feb 18 00:45:25 crc kubenswrapper[4847]: W0218 00:45:25.042705 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bb1820a_9449_4f74_8523_ee747951291d.slice/crio-251241b5d62ca36eb6575f6e25ad2f6cadd8700520e14739dc7967a4882a3e24 WatchSource:0}: Error finding container 251241b5d62ca36eb6575f6e25ad2f6cadd8700520e14739dc7967a4882a3e24: Status 404 returned error can't find the container with id 251241b5d62ca36eb6575f6e25ad2f6cadd8700520e14739dc7967a4882a3e24 Feb 18 00:45:25 crc kubenswrapper[4847]: I0218 00:45:25.556050 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-68zsz" 
event={"ID":"20119aa4-b1ef-4ac7-9b93-af64593b22b3","Type":"ContainerStarted","Data":"e946b96af3b90d5ebc08d85076437fa402c7ab994c3b12f09feada067d0af67d"} Feb 18 00:45:25 crc kubenswrapper[4847]: I0218 00:45:25.556289 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-68zsz" Feb 18 00:45:25 crc kubenswrapper[4847]: I0218 00:45:25.558336 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" event={"ID":"96061780-bc78-49b0-b23d-2118927130c4","Type":"ContainerStarted","Data":"49a72b4b1c4e0cbe69dee30ce64812a214458260738389c7b697656c3e91b2be"} Feb 18 00:45:25 crc kubenswrapper[4847]: I0218 00:45:25.561216 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" event={"ID":"6bb1820a-9449-4f74-8523-ee747951291d","Type":"ContainerStarted","Data":"185c4045d5fbc042085b2194d3ccf8457620346ba3b407ec3a94b45642f749b6"} Feb 18 00:45:25 crc kubenswrapper[4847]: I0218 00:45:25.561251 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" event={"ID":"6bb1820a-9449-4f74-8523-ee747951291d","Type":"ContainerStarted","Data":"251241b5d62ca36eb6575f6e25ad2f6cadd8700520e14739dc7967a4882a3e24"} Feb 18 00:45:25 crc kubenswrapper[4847]: I0218 00:45:25.561336 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:45:25 crc kubenswrapper[4847]: I0218 00:45:25.563261 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-4g2zb" event={"ID":"22395d35-6b40-4f53-b3ca-dced6abd4b13","Type":"ContainerStarted","Data":"81a89bb90d07e24c9ec182177b86426f7bc8e29c48ef254eec2b8566bb2fb67e"} Feb 18 
00:45:25 crc kubenswrapper[4847]: I0218 00:45:25.576699 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-68zsz" podStartSLOduration=3.491144944 podStartE2EDuration="34.576682248s" podCreationTimestamp="2026-02-18 00:44:51 +0000 UTC" firstStartedPulling="2026-02-18 00:44:53.736183738 +0000 UTC m=+1167.113534680" lastFinishedPulling="2026-02-18 00:45:24.821721032 +0000 UTC m=+1198.199071984" observedRunningTime="2026-02-18 00:45:25.568991247 +0000 UTC m=+1198.946342189" watchObservedRunningTime="2026-02-18 00:45:25.576682248 +0000 UTC m=+1198.954033190" Feb 18 00:45:25 crc kubenswrapper[4847]: I0218 00:45:25.594420 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" podStartSLOduration=33.594402025 podStartE2EDuration="33.594402025s" podCreationTimestamp="2026-02-18 00:44:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:45:25.593444453 +0000 UTC m=+1198.970795405" watchObservedRunningTime="2026-02-18 00:45:25.594402025 +0000 UTC m=+1198.971752967" Feb 18 00:45:28 crc kubenswrapper[4847]: I0218 00:45:28.589624 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" event={"ID":"96061780-bc78-49b0-b23d-2118927130c4","Type":"ContainerStarted","Data":"b39ab0790704fbc4cb5fddc168dd26564892fd4e509eba0c7705bcaeddb4b1dd"} Feb 18 00:45:28 crc kubenswrapper[4847]: I0218 00:45:28.590323 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" Feb 18 00:45:28 crc kubenswrapper[4847]: I0218 00:45:28.593556 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-79d975b745-4g2zb" event={"ID":"22395d35-6b40-4f53-b3ca-dced6abd4b13","Type":"ContainerStarted","Data":"111a6f54c627d2777a085d98af4cb5293f89f3f36beac4ffa7445b5dd35b2150"} Feb 18 00:45:28 crc kubenswrapper[4847]: I0218 00:45:28.593902 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-4g2zb" Feb 18 00:45:28 crc kubenswrapper[4847]: I0218 00:45:28.624180 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" podStartSLOduration=33.920367709 podStartE2EDuration="36.624165181s" podCreationTimestamp="2026-02-18 00:44:52 +0000 UTC" firstStartedPulling="2026-02-18 00:45:24.960429408 +0000 UTC m=+1198.337780350" lastFinishedPulling="2026-02-18 00:45:27.66422684 +0000 UTC m=+1201.041577822" observedRunningTime="2026-02-18 00:45:28.619269185 +0000 UTC m=+1201.996620127" watchObservedRunningTime="2026-02-18 00:45:28.624165181 +0000 UTC m=+1202.001516123" Feb 18 00:45:28 crc kubenswrapper[4847]: I0218 00:45:28.643088 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-4g2zb" podStartSLOduration=34.741679601 podStartE2EDuration="37.643062865s" podCreationTimestamp="2026-02-18 00:44:51 +0000 UTC" firstStartedPulling="2026-02-18 00:45:24.740036639 +0000 UTC m=+1198.117387591" lastFinishedPulling="2026-02-18 00:45:27.641419903 +0000 UTC m=+1201.018770855" observedRunningTime="2026-02-18 00:45:28.642035841 +0000 UTC m=+1202.019386793" watchObservedRunningTime="2026-02-18 00:45:28.643062865 +0000 UTC m=+1202.020413827" Feb 18 00:45:32 crc kubenswrapper[4847]: I0218 00:45:32.468771 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-68zsz" Feb 18 
00:45:32 crc kubenswrapper[4847]: I0218 00:45:32.599064 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-l2sl6" Feb 18 00:45:32 crc kubenswrapper[4847]: I0218 00:45:32.748009 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-cpzb6" Feb 18 00:45:32 crc kubenswrapper[4847]: I0218 00:45:32.860319 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-77b97c6f8f-pcgng" Feb 18 00:45:33 crc kubenswrapper[4847]: I0218 00:45:33.190030 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xttb8" Feb 18 00:45:34 crc kubenswrapper[4847]: I0218 00:45:34.151527 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-4g2zb" Feb 18 00:45:34 crc kubenswrapper[4847]: I0218 00:45:34.510773 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl" Feb 18 00:45:34 crc kubenswrapper[4847]: I0218 00:45:34.723008 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6994859df4-mcksc" Feb 18 00:45:37 crc kubenswrapper[4847]: E0218 00:45:37.435937 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z9kpc" podUID="594a9f71-f227-40eb-89ab-a9f661a63e3a" Feb 18 
00:45:53 crc kubenswrapper[4847]: I0218 00:45:53.863531 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z9kpc" event={"ID":"594a9f71-f227-40eb-89ab-a9f661a63e3a","Type":"ContainerStarted","Data":"f9a4dc4b09cffca17bd7726395b4faa27c3f4ed29b92dd1e44c0b1b1d70bf6ef"} Feb 18 00:45:53 crc kubenswrapper[4847]: I0218 00:45:53.895621 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z9kpc" podStartSLOduration=3.226507858 podStartE2EDuration="1m1.895570063s" podCreationTimestamp="2026-02-18 00:44:52 +0000 UTC" firstStartedPulling="2026-02-18 00:44:54.191914079 +0000 UTC m=+1167.569265021" lastFinishedPulling="2026-02-18 00:45:52.860976274 +0000 UTC m=+1226.238327226" observedRunningTime="2026-02-18 00:45:53.88779733 +0000 UTC m=+1227.265148312" watchObservedRunningTime="2026-02-18 00:45:53.895570063 +0000 UTC m=+1227.272921025" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.441703 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dgbg7"] Feb 18 00:46:16 crc kubenswrapper[4847]: E0218 00:46:16.442551 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2011530e-7707-49e4-b5a7-f7867a3b57bb" containerName="collect-profiles" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.442563 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="2011530e-7707-49e4-b5a7-f7867a3b57bb" containerName="collect-profiles" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.442777 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="2011530e-7707-49e4-b5a7-f7867a3b57bb" containerName="collect-profiles" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.443592 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dgbg7" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.464739 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.468351 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dgbg7"] Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.474420 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-m8x5m" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.475135 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.475332 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.527761 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-lfjzx"] Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.529091 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-lfjzx" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.540005 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.581510 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzmg5\" (UniqueName: \"kubernetes.io/projected/cb606413-c8e6-4b40-8073-24934cc0be3b-kube-api-access-wzmg5\") pod \"dnsmasq-dns-675f4bcbfc-dgbg7\" (UID: \"cb606413-c8e6-4b40-8073-24934cc0be3b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dgbg7" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.581737 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb606413-c8e6-4b40-8073-24934cc0be3b-config\") pod \"dnsmasq-dns-675f4bcbfc-dgbg7\" (UID: \"cb606413-c8e6-4b40-8073-24934cc0be3b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dgbg7" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.599438 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-lfjzx"] Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.685291 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzmg5\" (UniqueName: \"kubernetes.io/projected/cb606413-c8e6-4b40-8073-24934cc0be3b-kube-api-access-wzmg5\") pod \"dnsmasq-dns-675f4bcbfc-dgbg7\" (UID: \"cb606413-c8e6-4b40-8073-24934cc0be3b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dgbg7" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.685389 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4gxn\" (UniqueName: \"kubernetes.io/projected/4f78ef84-5dbb-4076-a834-d990c03c9b57-kube-api-access-q4gxn\") pod \"dnsmasq-dns-78dd6ddcc-lfjzx\" (UID: \"4f78ef84-5dbb-4076-a834-d990c03c9b57\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-lfjzx" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.685446 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb606413-c8e6-4b40-8073-24934cc0be3b-config\") pod \"dnsmasq-dns-675f4bcbfc-dgbg7\" (UID: \"cb606413-c8e6-4b40-8073-24934cc0be3b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dgbg7" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.685537 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f78ef84-5dbb-4076-a834-d990c03c9b57-config\") pod \"dnsmasq-dns-78dd6ddcc-lfjzx\" (UID: \"4f78ef84-5dbb-4076-a834-d990c03c9b57\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lfjzx" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.685567 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f78ef84-5dbb-4076-a834-d990c03c9b57-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-lfjzx\" (UID: \"4f78ef84-5dbb-4076-a834-d990c03c9b57\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lfjzx" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.687105 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb606413-c8e6-4b40-8073-24934cc0be3b-config\") pod \"dnsmasq-dns-675f4bcbfc-dgbg7\" (UID: \"cb606413-c8e6-4b40-8073-24934cc0be3b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dgbg7" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.707796 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzmg5\" (UniqueName: \"kubernetes.io/projected/cb606413-c8e6-4b40-8073-24934cc0be3b-kube-api-access-wzmg5\") pod \"dnsmasq-dns-675f4bcbfc-dgbg7\" (UID: \"cb606413-c8e6-4b40-8073-24934cc0be3b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dgbg7" Feb 18 00:46:16 crc kubenswrapper[4847]: 
I0218 00:46:16.767864 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dgbg7" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.786645 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4gxn\" (UniqueName: \"kubernetes.io/projected/4f78ef84-5dbb-4076-a834-d990c03c9b57-kube-api-access-q4gxn\") pod \"dnsmasq-dns-78dd6ddcc-lfjzx\" (UID: \"4f78ef84-5dbb-4076-a834-d990c03c9b57\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lfjzx" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.786775 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f78ef84-5dbb-4076-a834-d990c03c9b57-config\") pod \"dnsmasq-dns-78dd6ddcc-lfjzx\" (UID: \"4f78ef84-5dbb-4076-a834-d990c03c9b57\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lfjzx" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.786805 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f78ef84-5dbb-4076-a834-d990c03c9b57-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-lfjzx\" (UID: \"4f78ef84-5dbb-4076-a834-d990c03c9b57\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lfjzx" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.787843 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f78ef84-5dbb-4076-a834-d990c03c9b57-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-lfjzx\" (UID: \"4f78ef84-5dbb-4076-a834-d990c03c9b57\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lfjzx" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.789258 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f78ef84-5dbb-4076-a834-d990c03c9b57-config\") pod \"dnsmasq-dns-78dd6ddcc-lfjzx\" (UID: \"4f78ef84-5dbb-4076-a834-d990c03c9b57\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-lfjzx" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.804430 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4gxn\" (UniqueName: \"kubernetes.io/projected/4f78ef84-5dbb-4076-a834-d990c03c9b57-kube-api-access-q4gxn\") pod \"dnsmasq-dns-78dd6ddcc-lfjzx\" (UID: \"4f78ef84-5dbb-4076-a834-d990c03c9b57\") " pod="openstack/dnsmasq-dns-78dd6ddcc-lfjzx" Feb 18 00:46:16 crc kubenswrapper[4847]: I0218 00:46:16.884957 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-lfjzx" Feb 18 00:46:17 crc kubenswrapper[4847]: I0218 00:46:17.246911 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dgbg7"] Feb 18 00:46:17 crc kubenswrapper[4847]: I0218 00:46:17.380082 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-lfjzx"] Feb 18 00:46:17 crc kubenswrapper[4847]: W0218 00:46:17.380657 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f78ef84_5dbb_4076_a834_d990c03c9b57.slice/crio-0f55b663e5b0fb8d2d8b72e2cb1606bce6eef6c1599db80421dee10e8f12b4a1 WatchSource:0}: Error finding container 0f55b663e5b0fb8d2d8b72e2cb1606bce6eef6c1599db80421dee10e8f12b4a1: Status 404 returned error can't find the container with id 0f55b663e5b0fb8d2d8b72e2cb1606bce6eef6c1599db80421dee10e8f12b4a1 Feb 18 00:46:18 crc kubenswrapper[4847]: I0218 00:46:18.090736 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-lfjzx" event={"ID":"4f78ef84-5dbb-4076-a834-d990c03c9b57","Type":"ContainerStarted","Data":"0f55b663e5b0fb8d2d8b72e2cb1606bce6eef6c1599db80421dee10e8f12b4a1"} Feb 18 00:46:18 crc kubenswrapper[4847]: I0218 00:46:18.092227 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-dgbg7" 
event={"ID":"cb606413-c8e6-4b40-8073-24934cc0be3b","Type":"ContainerStarted","Data":"df80d3ffb548714e0a4659b3984b26222df0603c482816bbcf78e97b157272ff"} Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.228384 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dgbg7"] Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.264803 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8946t"] Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.266058 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8946t" Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.280957 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8946t"] Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.336447 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1b86276-e92b-4785-9d34-b66040d0ece2-config\") pod \"dnsmasq-dns-666b6646f7-8946t\" (UID: \"f1b86276-e92b-4785-9d34-b66040d0ece2\") " pod="openstack/dnsmasq-dns-666b6646f7-8946t" Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.336523 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4hbc\" (UniqueName: \"kubernetes.io/projected/f1b86276-e92b-4785-9d34-b66040d0ece2-kube-api-access-m4hbc\") pod \"dnsmasq-dns-666b6646f7-8946t\" (UID: \"f1b86276-e92b-4785-9d34-b66040d0ece2\") " pod="openstack/dnsmasq-dns-666b6646f7-8946t" Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.336822 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1b86276-e92b-4785-9d34-b66040d0ece2-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8946t\" (UID: \"f1b86276-e92b-4785-9d34-b66040d0ece2\") " 
pod="openstack/dnsmasq-dns-666b6646f7-8946t" Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.443213 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4hbc\" (UniqueName: \"kubernetes.io/projected/f1b86276-e92b-4785-9d34-b66040d0ece2-kube-api-access-m4hbc\") pod \"dnsmasq-dns-666b6646f7-8946t\" (UID: \"f1b86276-e92b-4785-9d34-b66040d0ece2\") " pod="openstack/dnsmasq-dns-666b6646f7-8946t" Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.443318 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1b86276-e92b-4785-9d34-b66040d0ece2-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8946t\" (UID: \"f1b86276-e92b-4785-9d34-b66040d0ece2\") " pod="openstack/dnsmasq-dns-666b6646f7-8946t" Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.443373 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1b86276-e92b-4785-9d34-b66040d0ece2-config\") pod \"dnsmasq-dns-666b6646f7-8946t\" (UID: \"f1b86276-e92b-4785-9d34-b66040d0ece2\") " pod="openstack/dnsmasq-dns-666b6646f7-8946t" Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.446378 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1b86276-e92b-4785-9d34-b66040d0ece2-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8946t\" (UID: \"f1b86276-e92b-4785-9d34-b66040d0ece2\") " pod="openstack/dnsmasq-dns-666b6646f7-8946t" Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.456850 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1b86276-e92b-4785-9d34-b66040d0ece2-config\") pod \"dnsmasq-dns-666b6646f7-8946t\" (UID: \"f1b86276-e92b-4785-9d34-b66040d0ece2\") " pod="openstack/dnsmasq-dns-666b6646f7-8946t" Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.479465 4847 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4hbc\" (UniqueName: \"kubernetes.io/projected/f1b86276-e92b-4785-9d34-b66040d0ece2-kube-api-access-m4hbc\") pod \"dnsmasq-dns-666b6646f7-8946t\" (UID: \"f1b86276-e92b-4785-9d34-b66040d0ece2\") " pod="openstack/dnsmasq-dns-666b6646f7-8946t" Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.562626 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-lfjzx"] Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.593216 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8946t" Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.601881 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-79d5g"] Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.603666 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-79d5g" Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.614842 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-79d5g"] Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.748313 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z5dh\" (UniqueName: \"kubernetes.io/projected/1213dcf5-90b2-4824-865c-baf5f7646ebc-kube-api-access-7z5dh\") pod \"dnsmasq-dns-57d769cc4f-79d5g\" (UID: \"1213dcf5-90b2-4824-865c-baf5f7646ebc\") " pod="openstack/dnsmasq-dns-57d769cc4f-79d5g" Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.748529 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1213dcf5-90b2-4824-865c-baf5f7646ebc-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-79d5g\" (UID: \"1213dcf5-90b2-4824-865c-baf5f7646ebc\") " pod="openstack/dnsmasq-dns-57d769cc4f-79d5g" 
Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.748730 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1213dcf5-90b2-4824-865c-baf5f7646ebc-config\") pod \"dnsmasq-dns-57d769cc4f-79d5g\" (UID: \"1213dcf5-90b2-4824-865c-baf5f7646ebc\") " pod="openstack/dnsmasq-dns-57d769cc4f-79d5g" Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.850344 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z5dh\" (UniqueName: \"kubernetes.io/projected/1213dcf5-90b2-4824-865c-baf5f7646ebc-kube-api-access-7z5dh\") pod \"dnsmasq-dns-57d769cc4f-79d5g\" (UID: \"1213dcf5-90b2-4824-865c-baf5f7646ebc\") " pod="openstack/dnsmasq-dns-57d769cc4f-79d5g" Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.850682 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1213dcf5-90b2-4824-865c-baf5f7646ebc-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-79d5g\" (UID: \"1213dcf5-90b2-4824-865c-baf5f7646ebc\") " pod="openstack/dnsmasq-dns-57d769cc4f-79d5g" Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.850743 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1213dcf5-90b2-4824-865c-baf5f7646ebc-config\") pod \"dnsmasq-dns-57d769cc4f-79d5g\" (UID: \"1213dcf5-90b2-4824-865c-baf5f7646ebc\") " pod="openstack/dnsmasq-dns-57d769cc4f-79d5g" Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.852404 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1213dcf5-90b2-4824-865c-baf5f7646ebc-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-79d5g\" (UID: \"1213dcf5-90b2-4824-865c-baf5f7646ebc\") " pod="openstack/dnsmasq-dns-57d769cc4f-79d5g" Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.854170 4847 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1213dcf5-90b2-4824-865c-baf5f7646ebc-config\") pod \"dnsmasq-dns-57d769cc4f-79d5g\" (UID: \"1213dcf5-90b2-4824-865c-baf5f7646ebc\") " pod="openstack/dnsmasq-dns-57d769cc4f-79d5g" Feb 18 00:46:19 crc kubenswrapper[4847]: I0218 00:46:19.866949 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z5dh\" (UniqueName: \"kubernetes.io/projected/1213dcf5-90b2-4824-865c-baf5f7646ebc-kube-api-access-7z5dh\") pod \"dnsmasq-dns-57d769cc4f-79d5g\" (UID: \"1213dcf5-90b2-4824-865c-baf5f7646ebc\") " pod="openstack/dnsmasq-dns-57d769cc4f-79d5g" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.001264 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-79d5g" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.070740 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8946t"] Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.142056 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8946t" event={"ID":"f1b86276-e92b-4785-9d34-b66040d0ece2","Type":"ContainerStarted","Data":"ada2155ed20ec3bb1da1aa7bb927cf541ab2ff087c7a02faaff3a846ddc357b7"} Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.396067 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.398648 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.401803 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.402160 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.401987 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.402331 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.402039 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.402075 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.402449 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-x9s2h" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.403485 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.461438 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1977a705-30e5-456c-8e2c-2cd05e0325e3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.461476 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/1977a705-30e5-456c-8e2c-2cd05e0325e3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.461478 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-79d5g"] Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.461830 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.461957 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.461987 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1977a705-30e5-456c-8e2c-2cd05e0325e3-config-data\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.462030 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbfpz\" (UniqueName: \"kubernetes.io/projected/1977a705-30e5-456c-8e2c-2cd05e0325e3-kube-api-access-wbfpz\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.462173 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1977a705-30e5-456c-8e2c-2cd05e0325e3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.462276 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.462350 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.462381 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1977a705-30e5-456c-8e2c-2cd05e0325e3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.462581 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: W0218 00:46:20.464878 4847 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1213dcf5_90b2_4824_865c_baf5f7646ebc.slice/crio-a1293589e8deb243e5eac7c1a00adcfd23859596e9dd393e60e43b95e2bd8334 WatchSource:0}: Error finding container a1293589e8deb243e5eac7c1a00adcfd23859596e9dd393e60e43b95e2bd8334: Status 404 returned error can't find the container with id a1293589e8deb243e5eac7c1a00adcfd23859596e9dd393e60e43b95e2bd8334 Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.564400 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1977a705-30e5-456c-8e2c-2cd05e0325e3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.564466 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1977a705-30e5-456c-8e2c-2cd05e0325e3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.564519 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.564549 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.564570 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1977a705-30e5-456c-8e2c-2cd05e0325e3-config-data\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.564597 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbfpz\" (UniqueName: \"kubernetes.io/projected/1977a705-30e5-456c-8e2c-2cd05e0325e3-kube-api-access-wbfpz\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.564638 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1977a705-30e5-456c-8e2c-2cd05e0325e3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.564676 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.564711 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.564734 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1977a705-30e5-456c-8e2c-2cd05e0325e3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.564794 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.565316 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.565502 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1977a705-30e5-456c-8e2c-2cd05e0325e3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.566217 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1977a705-30e5-456c-8e2c-2cd05e0325e3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.566256 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1977a705-30e5-456c-8e2c-2cd05e0325e3-config-data\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.566504 4847 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.567685 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.574160 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.574575 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1977a705-30e5-456c-8e2c-2cd05e0325e3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.575113 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.584534 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbfpz\" (UniqueName: \"kubernetes.io/projected/1977a705-30e5-456c-8e2c-2cd05e0325e3-kube-api-access-wbfpz\") pod 
\"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.591513 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1977a705-30e5-456c-8e2c-2cd05e0325e3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.610769 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.726524 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.742053 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.746442 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.757906 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.762987 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.763029 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.763319 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qnvvw" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.763430 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.763522 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.763631 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.763775 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.892903 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.892952 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.892978 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.892997 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.893034 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.893053 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.893072 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.893640 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.893741 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.893806 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.893884 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnnf2\" (UniqueName: \"kubernetes.io/projected/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-kube-api-access-fnnf2\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.995370 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" 
(UniqueName: \"kubernetes.io/empty-dir/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.995785 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.995857 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.995924 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnnf2\" (UniqueName: \"kubernetes.io/projected/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-kube-api-access-fnnf2\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.995953 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.995985 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-pod-info\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.996014 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.996057 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.996092 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.996109 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.996129 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.996911 4847 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.996985 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:20 crc kubenswrapper[4847]: I0218 00:46:20.997385 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:21 crc kubenswrapper[4847]: I0218 00:46:20.997692 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:21 crc kubenswrapper[4847]: I0218 00:46:20.997779 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:21 crc kubenswrapper[4847]: I0218 00:46:20.997813 4847 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:21 crc kubenswrapper[4847]: I0218 00:46:21.002301 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:21 crc kubenswrapper[4847]: I0218 00:46:21.009143 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:21 crc kubenswrapper[4847]: I0218 00:46:21.009489 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:21 crc kubenswrapper[4847]: I0218 00:46:21.016365 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnnf2\" (UniqueName: \"kubernetes.io/projected/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-kube-api-access-fnnf2\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:21 crc kubenswrapper[4847]: I0218 00:46:21.018763 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:21 crc kubenswrapper[4847]: I0218 00:46:21.018919 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:21 crc kubenswrapper[4847]: I0218 00:46:21.108831 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:46:21 crc kubenswrapper[4847]: I0218 00:46:21.165680 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-79d5g" event={"ID":"1213dcf5-90b2-4824-865c-baf5f7646ebc","Type":"ContainerStarted","Data":"a1293589e8deb243e5eac7c1a00adcfd23859596e9dd393e60e43b95e2bd8334"} Feb 18 00:46:21 crc kubenswrapper[4847]: I0218 00:46:21.293945 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 00:46:21 crc kubenswrapper[4847]: I0218 00:46:21.906935 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 00:46:21 crc kubenswrapper[4847]: I0218 00:46:21.985660 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 18 00:46:21 crc kubenswrapper[4847]: I0218 00:46:21.987111 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 18 00:46:21 crc kubenswrapper[4847]: I0218 00:46:21.987224 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 18 00:46:21 crc kubenswrapper[4847]: I0218 00:46:21.989776 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 18 00:46:21 crc kubenswrapper[4847]: I0218 00:46:21.990974 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 18 00:46:21 crc kubenswrapper[4847]: I0218 00:46:21.991342 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-mcw59" Feb 18 00:46:21 crc kubenswrapper[4847]: I0218 00:46:21.994328 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 18 00:46:21 crc kubenswrapper[4847]: I0218 00:46:21.999861 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.025224 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0" Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.025264 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72-kolla-config\") pod \"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0" Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.025305 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzmmz\" (UniqueName: \"kubernetes.io/projected/5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72-kube-api-access-lzmmz\") pod 
\"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0" Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.025331 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0" Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.025351 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0" Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.025387 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72-config-data-default\") pod \"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0" Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.025429 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0" Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.025454 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0" Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.127595 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0" Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.127672 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72-kolla-config\") pod \"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0" Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.127742 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzmmz\" (UniqueName: \"kubernetes.io/projected/5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72-kube-api-access-lzmmz\") pod \"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0" Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.127767 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0" Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.127788 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0" Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 
00:46:22.127827 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72-config-data-default\") pod \"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0"
Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.127873 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0"
Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.127906 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0"
Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.130554 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0"
Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.130811 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0"
Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.131454 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72-kolla-config\") pod \"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0"
Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.132085 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72-config-data-default\") pod \"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0"
Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.132233 4847 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0"
Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.152191 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzmmz\" (UniqueName: \"kubernetes.io/projected/5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72-kube-api-access-lzmmz\") pod \"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0"
Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.153348 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0"
Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.153985 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0"
Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.156160 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72\") " pod="openstack/openstack-galera-0"
Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.175276 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d","Type":"ContainerStarted","Data":"9b35fc296847a7d807fb2aac46813feba648402ca15b83acc1edd79eaab3903a"}
Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.176067 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1977a705-30e5-456c-8e2c-2cd05e0325e3","Type":"ContainerStarted","Data":"4ccbcecacfb9a51bcb7fb2da73c21f4c45da56444eecabb0c00d34518f0e2f18"}
Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.317741 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 18 00:46:22 crc kubenswrapper[4847]: I0218 00:46:22.870339 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.186591 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72","Type":"ContainerStarted","Data":"fa2508f3d43aab7f9ee50eaf296fca353fa75db0ae2d14aec07e7e7f4e9efaa6"}
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.356740 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.358680 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.360658 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.360866 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-zcprn"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.360982 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.367018 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.418258 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.495452 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.495505 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.572574 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/109c4d3d-c276-45ed-93d2-d1414e156fb9-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.572642 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/109c4d3d-c276-45ed-93d2-d1414e156fb9-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.572675 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.572731 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/109c4d3d-c276-45ed-93d2-d1414e156fb9-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.572757 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/109c4d3d-c276-45ed-93d2-d1414e156fb9-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.572796 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/109c4d3d-c276-45ed-93d2-d1414e156fb9-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.572822 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/109c4d3d-c276-45ed-93d2-d1414e156fb9-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.572901 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdqnc\" (UniqueName: \"kubernetes.io/projected/109c4d3d-c276-45ed-93d2-d1414e156fb9-kube-api-access-jdqnc\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.674233 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdqnc\" (UniqueName: \"kubernetes.io/projected/109c4d3d-c276-45ed-93d2-d1414e156fb9-kube-api-access-jdqnc\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.674308 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/109c4d3d-c276-45ed-93d2-d1414e156fb9-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.674338 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/109c4d3d-c276-45ed-93d2-d1414e156fb9-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.674364 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.674404 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/109c4d3d-c276-45ed-93d2-d1414e156fb9-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.674427 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/109c4d3d-c276-45ed-93d2-d1414e156fb9-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.674453 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/109c4d3d-c276-45ed-93d2-d1414e156fb9-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.674478 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/109c4d3d-c276-45ed-93d2-d1414e156fb9-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.675010 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/109c4d3d-c276-45ed-93d2-d1414e156fb9-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.675077 4847 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.675299 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/109c4d3d-c276-45ed-93d2-d1414e156fb9-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.676311 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/109c4d3d-c276-45ed-93d2-d1414e156fb9-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.676405 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/109c4d3d-c276-45ed-93d2-d1414e156fb9-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.680051 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/109c4d3d-c276-45ed-93d2-d1414e156fb9-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.691291 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/109c4d3d-c276-45ed-93d2-d1414e156fb9-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.709162 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.722772 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdqnc\" (UniqueName: \"kubernetes.io/projected/109c4d3d-c276-45ed-93d2-d1414e156fb9-kube-api-access-jdqnc\") pod \"openstack-cell1-galera-0\" (UID: \"109c4d3d-c276-45ed-93d2-d1414e156fb9\") " pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.781694 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.786472 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.787711 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.805284 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.805367 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.805718 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-mg6rb"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.828028 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.881348 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/085dadd3-8aae-4c94-84e4-6289f1e537e1-kolla-config\") pod \"memcached-0\" (UID: \"085dadd3-8aae-4c94-84e4-6289f1e537e1\") " pod="openstack/memcached-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.881407 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js6n2\" (UniqueName: \"kubernetes.io/projected/085dadd3-8aae-4c94-84e4-6289f1e537e1-kube-api-access-js6n2\") pod \"memcached-0\" (UID: \"085dadd3-8aae-4c94-84e4-6289f1e537e1\") " pod="openstack/memcached-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.881428 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/085dadd3-8aae-4c94-84e4-6289f1e537e1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"085dadd3-8aae-4c94-84e4-6289f1e537e1\") " pod="openstack/memcached-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.881464 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085dadd3-8aae-4c94-84e4-6289f1e537e1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"085dadd3-8aae-4c94-84e4-6289f1e537e1\") " pod="openstack/memcached-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.881489 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/085dadd3-8aae-4c94-84e4-6289f1e537e1-config-data\") pod \"memcached-0\" (UID: \"085dadd3-8aae-4c94-84e4-6289f1e537e1\") " pod="openstack/memcached-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.983512 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/085dadd3-8aae-4c94-84e4-6289f1e537e1-kolla-config\") pod \"memcached-0\" (UID: \"085dadd3-8aae-4c94-84e4-6289f1e537e1\") " pod="openstack/memcached-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.983572 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js6n2\" (UniqueName: \"kubernetes.io/projected/085dadd3-8aae-4c94-84e4-6289f1e537e1-kube-api-access-js6n2\") pod \"memcached-0\" (UID: \"085dadd3-8aae-4c94-84e4-6289f1e537e1\") " pod="openstack/memcached-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.983617 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/085dadd3-8aae-4c94-84e4-6289f1e537e1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"085dadd3-8aae-4c94-84e4-6289f1e537e1\") " pod="openstack/memcached-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.983641 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085dadd3-8aae-4c94-84e4-6289f1e537e1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"085dadd3-8aae-4c94-84e4-6289f1e537e1\") " pod="openstack/memcached-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.983668 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/085dadd3-8aae-4c94-84e4-6289f1e537e1-config-data\") pod \"memcached-0\" (UID: \"085dadd3-8aae-4c94-84e4-6289f1e537e1\") " pod="openstack/memcached-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.984438 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/085dadd3-8aae-4c94-84e4-6289f1e537e1-config-data\") pod \"memcached-0\" (UID: \"085dadd3-8aae-4c94-84e4-6289f1e537e1\") " pod="openstack/memcached-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.984979 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/085dadd3-8aae-4c94-84e4-6289f1e537e1-kolla-config\") pod \"memcached-0\" (UID: \"085dadd3-8aae-4c94-84e4-6289f1e537e1\") " pod="openstack/memcached-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.987886 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/085dadd3-8aae-4c94-84e4-6289f1e537e1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"085dadd3-8aae-4c94-84e4-6289f1e537e1\") " pod="openstack/memcached-0"
Feb 18 00:46:23 crc kubenswrapper[4847]: I0218 00:46:23.988436 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/085dadd3-8aae-4c94-84e4-6289f1e537e1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"085dadd3-8aae-4c94-84e4-6289f1e537e1\") " pod="openstack/memcached-0"
Feb 18 00:46:24 crc kubenswrapper[4847]: I0218 00:46:24.000700 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js6n2\" (UniqueName: \"kubernetes.io/projected/085dadd3-8aae-4c94-84e4-6289f1e537e1-kube-api-access-js6n2\") pod \"memcached-0\" (UID: \"085dadd3-8aae-4c94-84e4-6289f1e537e1\") " pod="openstack/memcached-0"
Feb 18 00:46:24 crc kubenswrapper[4847]: I0218 00:46:24.130960 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 18 00:46:26 crc kubenswrapper[4847]: I0218 00:46:26.243816 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 18 00:46:26 crc kubenswrapper[4847]: I0218 00:46:26.245308 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 18 00:46:26 crc kubenswrapper[4847]: I0218 00:46:26.248036 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-kd6jm"
Feb 18 00:46:26 crc kubenswrapper[4847]: I0218 00:46:26.254874 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 18 00:46:26 crc kubenswrapper[4847]: I0218 00:46:26.340038 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-869bw\" (UniqueName: \"kubernetes.io/projected/0375fa1c-b349-44b5-8ba6-1d1afe1715ce-kube-api-access-869bw\") pod \"kube-state-metrics-0\" (UID: \"0375fa1c-b349-44b5-8ba6-1d1afe1715ce\") " pod="openstack/kube-state-metrics-0"
Feb 18 00:46:26 crc kubenswrapper[4847]: I0218 00:46:26.441251 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-869bw\" (UniqueName: \"kubernetes.io/projected/0375fa1c-b349-44b5-8ba6-1d1afe1715ce-kube-api-access-869bw\") pod \"kube-state-metrics-0\" (UID: \"0375fa1c-b349-44b5-8ba6-1d1afe1715ce\") " pod="openstack/kube-state-metrics-0"
Feb 18 00:46:26 crc kubenswrapper[4847]: I0218 00:46:26.461391 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-869bw\" (UniqueName: \"kubernetes.io/projected/0375fa1c-b349-44b5-8ba6-1d1afe1715ce-kube-api-access-869bw\") pod \"kube-state-metrics-0\" (UID: \"0375fa1c-b349-44b5-8ba6-1d1afe1715ce\") " pod="openstack/kube-state-metrics-0"
Feb 18 00:46:26 crc kubenswrapper[4847]: I0218 00:46:26.579926 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 18 00:46:26 crc kubenswrapper[4847]: I0218 00:46:26.794237 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-25tvl"]
Feb 18 00:46:26 crc kubenswrapper[4847]: I0218 00:46:26.800280 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-25tvl"
Feb 18 00:46:26 crc kubenswrapper[4847]: I0218 00:46:26.805921 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards"
Feb 18 00:46:26 crc kubenswrapper[4847]: I0218 00:46:26.806384 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-snrfw"
Feb 18 00:46:26 crc kubenswrapper[4847]: I0218 00:46:26.813696 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-25tvl"]
Feb 18 00:46:26 crc kubenswrapper[4847]: I0218 00:46:26.955071 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8095b217-447f-4789-8ef4-fa117075737c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-25tvl\" (UID: \"8095b217-447f-4789-8ef4-fa117075737c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-25tvl"
Feb 18 00:46:26 crc kubenswrapper[4847]: I0218 00:46:26.955109 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dccx\" (UniqueName: \"kubernetes.io/projected/8095b217-447f-4789-8ef4-fa117075737c-kube-api-access-6dccx\") pod \"observability-ui-dashboards-66cbf594b5-25tvl\" (UID: \"8095b217-447f-4789-8ef4-fa117075737c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-25tvl"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.057013 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dccx\" (UniqueName: \"kubernetes.io/projected/8095b217-447f-4789-8ef4-fa117075737c-kube-api-access-6dccx\") pod \"observability-ui-dashboards-66cbf594b5-25tvl\" (UID: \"8095b217-447f-4789-8ef4-fa117075737c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-25tvl"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.057399 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8095b217-447f-4789-8ef4-fa117075737c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-25tvl\" (UID: \"8095b217-447f-4789-8ef4-fa117075737c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-25tvl"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.061490 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8095b217-447f-4789-8ef4-fa117075737c-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-25tvl\" (UID: \"8095b217-447f-4789-8ef4-fa117075737c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-25tvl"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.076186 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dccx\" (UniqueName: \"kubernetes.io/projected/8095b217-447f-4789-8ef4-fa117075737c-kube-api-access-6dccx\") pod \"observability-ui-dashboards-66cbf594b5-25tvl\" (UID: \"8095b217-447f-4789-8ef4-fa117075737c\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-25tvl"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.104943 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7557c96bc4-9fcqd"]
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.106083 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7557c96bc4-9fcqd"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.129734 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7557c96bc4-9fcqd"]
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.131592 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-25tvl"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.261525 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/51d1ffce-ce81-406d-a18a-b8d134c67914-oauth-serving-cert\") pod \"console-7557c96bc4-9fcqd\" (UID: \"51d1ffce-ce81-406d-a18a-b8d134c67914\") " pod="openshift-console/console-7557c96bc4-9fcqd"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.261585 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/51d1ffce-ce81-406d-a18a-b8d134c67914-console-config\") pod \"console-7557c96bc4-9fcqd\" (UID: \"51d1ffce-ce81-406d-a18a-b8d134c67914\") " pod="openshift-console/console-7557c96bc4-9fcqd"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.261707 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51d1ffce-ce81-406d-a18a-b8d134c67914-trusted-ca-bundle\") pod \"console-7557c96bc4-9fcqd\" (UID: \"51d1ffce-ce81-406d-a18a-b8d134c67914\") " pod="openshift-console/console-7557c96bc4-9fcqd"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.261732 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/51d1ffce-ce81-406d-a18a-b8d134c67914-console-serving-cert\") pod \"console-7557c96bc4-9fcqd\" (UID: \"51d1ffce-ce81-406d-a18a-b8d134c67914\") " pod="openshift-console/console-7557c96bc4-9fcqd"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.261758 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/51d1ffce-ce81-406d-a18a-b8d134c67914-console-oauth-config\") pod \"console-7557c96bc4-9fcqd\" (UID: \"51d1ffce-ce81-406d-a18a-b8d134c67914\") " pod="openshift-console/console-7557c96bc4-9fcqd"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.261783 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/51d1ffce-ce81-406d-a18a-b8d134c67914-service-ca\") pod \"console-7557c96bc4-9fcqd\" (UID: \"51d1ffce-ce81-406d-a18a-b8d134c67914\") " pod="openshift-console/console-7557c96bc4-9fcqd"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.261851 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6st7\" (UniqueName: \"kubernetes.io/projected/51d1ffce-ce81-406d-a18a-b8d134c67914-kube-api-access-m6st7\") pod \"console-7557c96bc4-9fcqd\" (UID: \"51d1ffce-ce81-406d-a18a-b8d134c67914\") " pod="openshift-console/console-7557c96bc4-9fcqd"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.273578 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.276763 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.278978 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.279164 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.279189 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.279360 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.279384 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.279390 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-gh5vq"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.279420 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.295303 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.296016 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.367002 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1989970b-d11c-44b8-b0b7-011c8e842c1f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.367070 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1989970b-d11c-44b8-b0b7-011c8e842c1f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.367100 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1989970b-d11c-44b8-b0b7-011c8e842c1f-config\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.367128 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1989970b-d11c-44b8-b0b7-011c8e842c1f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.367163 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1989970b-d11c-44b8-b0b7-011c8e842c1f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.367188 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.367259 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b97sh\" (UniqueName: \"kubernetes.io/projected/1989970b-d11c-44b8-b0b7-011c8e842c1f-kube-api-access-b97sh\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.367310 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/51d1ffce-ce81-406d-a18a-b8d134c67914-oauth-serving-cert\") pod \"console-7557c96bc4-9fcqd\" (UID: \"51d1ffce-ce81-406d-a18a-b8d134c67914\") " pod="openshift-console/console-7557c96bc4-9fcqd"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.367364 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/51d1ffce-ce81-406d-a18a-b8d134c67914-console-config\") pod \"console-7557c96bc4-9fcqd\" (UID: \"51d1ffce-ce81-406d-a18a-b8d134c67914\") " pod="openshift-console/console-7557c96bc4-9fcqd"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.367399 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1989970b-d11c-44b8-b0b7-011c8e842c1f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.367421 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51d1ffce-ce81-406d-a18a-b8d134c67914-trusted-ca-bundle\") pod \"console-7557c96bc4-9fcqd\" (UID: \"51d1ffce-ce81-406d-a18a-b8d134c67914\") " pod="openshift-console/console-7557c96bc4-9fcqd"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.367450 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/51d1ffce-ce81-406d-a18a-b8d134c67914-console-serving-cert\") pod \"console-7557c96bc4-9fcqd\" (UID: \"51d1ffce-ce81-406d-a18a-b8d134c67914\") " pod="openshift-console/console-7557c96bc4-9fcqd"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.367515 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/51d1ffce-ce81-406d-a18a-b8d134c67914-console-oauth-config\") pod \"console-7557c96bc4-9fcqd\" (UID: \"51d1ffce-ce81-406d-a18a-b8d134c67914\") " pod="openshift-console/console-7557c96bc4-9fcqd"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.367552 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1989970b-d11c-44b8-b0b7-011c8e842c1f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0"
Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.371445 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/51d1ffce-ce81-406d-a18a-b8d134c67914-console-serving-cert\") pod \"console-7557c96bc4-9fcqd\" (UID: \"51d1ffce-ce81-406d-a18a-b8d134c67914\") " pod="openshift-console/console-7557c96bc4-9fcqd"
Feb 18 00:46:27 crc
kubenswrapper[4847]: I0218 00:46:27.371756 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/51d1ffce-ce81-406d-a18a-b8d134c67914-console-oauth-config\") pod \"console-7557c96bc4-9fcqd\" (UID: \"51d1ffce-ce81-406d-a18a-b8d134c67914\") " pod="openshift-console/console-7557c96bc4-9fcqd" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.372165 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/51d1ffce-ce81-406d-a18a-b8d134c67914-service-ca\") pod \"console-7557c96bc4-9fcqd\" (UID: \"51d1ffce-ce81-406d-a18a-b8d134c67914\") " pod="openshift-console/console-7557c96bc4-9fcqd" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.372186 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/51d1ffce-ce81-406d-a18a-b8d134c67914-console-config\") pod \"console-7557c96bc4-9fcqd\" (UID: \"51d1ffce-ce81-406d-a18a-b8d134c67914\") " pod="openshift-console/console-7557c96bc4-9fcqd" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.372198 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1989970b-d11c-44b8-b0b7-011c8e842c1f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.372814 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/51d1ffce-ce81-406d-a18a-b8d134c67914-service-ca\") pod \"console-7557c96bc4-9fcqd\" (UID: \"51d1ffce-ce81-406d-a18a-b8d134c67914\") " pod="openshift-console/console-7557c96bc4-9fcqd" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.373067 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-m6st7\" (UniqueName: \"kubernetes.io/projected/51d1ffce-ce81-406d-a18a-b8d134c67914-kube-api-access-m6st7\") pod \"console-7557c96bc4-9fcqd\" (UID: \"51d1ffce-ce81-406d-a18a-b8d134c67914\") " pod="openshift-console/console-7557c96bc4-9fcqd" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.373510 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/51d1ffce-ce81-406d-a18a-b8d134c67914-oauth-serving-cert\") pod \"console-7557c96bc4-9fcqd\" (UID: \"51d1ffce-ce81-406d-a18a-b8d134c67914\") " pod="openshift-console/console-7557c96bc4-9fcqd" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.374356 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51d1ffce-ce81-406d-a18a-b8d134c67914-trusted-ca-bundle\") pod \"console-7557c96bc4-9fcqd\" (UID: \"51d1ffce-ce81-406d-a18a-b8d134c67914\") " pod="openshift-console/console-7557c96bc4-9fcqd" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.446265 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6st7\" (UniqueName: \"kubernetes.io/projected/51d1ffce-ce81-406d-a18a-b8d134c67914-kube-api-access-m6st7\") pod \"console-7557c96bc4-9fcqd\" (UID: \"51d1ffce-ce81-406d-a18a-b8d134c67914\") " pod="openshift-console/console-7557c96bc4-9fcqd" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.454132 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7557c96bc4-9fcqd" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.481521 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1989970b-d11c-44b8-b0b7-011c8e842c1f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.481567 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1989970b-d11c-44b8-b0b7-011c8e842c1f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.481592 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1989970b-d11c-44b8-b0b7-011c8e842c1f-config\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.481620 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1989970b-d11c-44b8-b0b7-011c8e842c1f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.481642 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1989970b-d11c-44b8-b0b7-011c8e842c1f-thanos-prometheus-http-client-file\") pod 
\"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.481660 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.481679 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b97sh\" (UniqueName: \"kubernetes.io/projected/1989970b-d11c-44b8-b0b7-011c8e842c1f-kube-api-access-b97sh\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.481744 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1989970b-d11c-44b8-b0b7-011c8e842c1f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.481781 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1989970b-d11c-44b8-b0b7-011c8e842c1f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.481804 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1989970b-d11c-44b8-b0b7-011c8e842c1f-web-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.482741 4847 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/prometheus-metric-storage-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.488234 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.488394 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.488510 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.489197 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.489316 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.489425 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.498180 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.499265 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/1989970b-d11c-44b8-b0b7-011c8e842c1f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.499402 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1989970b-d11c-44b8-b0b7-011c8e842c1f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.500136 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1989970b-d11c-44b8-b0b7-011c8e842c1f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.505563 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1989970b-d11c-44b8-b0b7-011c8e842c1f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.508141 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1989970b-d11c-44b8-b0b7-011c8e842c1f-config\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.515372 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/1989970b-d11c-44b8-b0b7-011c8e842c1f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.517632 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1989970b-d11c-44b8-b0b7-011c8e842c1f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.519548 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1989970b-d11c-44b8-b0b7-011c8e842c1f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.522768 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b97sh\" (UniqueName: \"kubernetes.io/projected/1989970b-d11c-44b8-b0b7-011c8e842c1f-kube-api-access-b97sh\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.535079 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"prometheus-metric-storage-0\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.620471 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-gh5vq" Feb 18 00:46:27 crc kubenswrapper[4847]: I0218 00:46:27.630421 4847 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.081210 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xh6ft"] Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.082828 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xh6ft" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.085665 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.112710 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-h5k8p"] Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.112940 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.113183 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-nbrsv" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.118775 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.125411 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xh6ft"] Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.221319 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-h5k8p"] Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.222770 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6v9z\" (UniqueName: \"kubernetes.io/projected/b233fc1e-4730-4c0c-bf0d-741bf86d3a19-kube-api-access-h6v9z\") pod \"ovn-controller-ovs-h5k8p\" (UID: \"b233fc1e-4730-4c0c-bf0d-741bf86d3a19\") " pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.222814 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2801a17e-6108-4ffe-9eac-7068b93707e1-var-run-ovn\") pod \"ovn-controller-xh6ft\" (UID: \"2801a17e-6108-4ffe-9eac-7068b93707e1\") " pod="openstack/ovn-controller-xh6ft" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.222841 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b233fc1e-4730-4c0c-bf0d-741bf86d3a19-var-lib\") pod \"ovn-controller-ovs-h5k8p\" (UID: \"b233fc1e-4730-4c0c-bf0d-741bf86d3a19\") " pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.222868 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq89w\" (UniqueName: \"kubernetes.io/projected/2801a17e-6108-4ffe-9eac-7068b93707e1-kube-api-access-sq89w\") pod \"ovn-controller-xh6ft\" (UID: \"2801a17e-6108-4ffe-9eac-7068b93707e1\") " pod="openstack/ovn-controller-xh6ft" Feb 18 00:46:29 crc 
kubenswrapper[4847]: I0218 00:46:29.222922 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2801a17e-6108-4ffe-9eac-7068b93707e1-ovn-controller-tls-certs\") pod \"ovn-controller-xh6ft\" (UID: \"2801a17e-6108-4ffe-9eac-7068b93707e1\") " pod="openstack/ovn-controller-xh6ft" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.222951 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2801a17e-6108-4ffe-9eac-7068b93707e1-var-run\") pod \"ovn-controller-xh6ft\" (UID: \"2801a17e-6108-4ffe-9eac-7068b93707e1\") " pod="openstack/ovn-controller-xh6ft" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.222971 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b233fc1e-4730-4c0c-bf0d-741bf86d3a19-scripts\") pod \"ovn-controller-ovs-h5k8p\" (UID: \"b233fc1e-4730-4c0c-bf0d-741bf86d3a19\") " pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.222986 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b233fc1e-4730-4c0c-bf0d-741bf86d3a19-var-log\") pod \"ovn-controller-ovs-h5k8p\" (UID: \"b233fc1e-4730-4c0c-bf0d-741bf86d3a19\") " pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.223015 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2801a17e-6108-4ffe-9eac-7068b93707e1-var-log-ovn\") pod \"ovn-controller-xh6ft\" (UID: \"2801a17e-6108-4ffe-9eac-7068b93707e1\") " pod="openstack/ovn-controller-xh6ft" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.223039 4847 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2801a17e-6108-4ffe-9eac-7068b93707e1-scripts\") pod \"ovn-controller-xh6ft\" (UID: \"2801a17e-6108-4ffe-9eac-7068b93707e1\") " pod="openstack/ovn-controller-xh6ft" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.223060 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b233fc1e-4730-4c0c-bf0d-741bf86d3a19-var-run\") pod \"ovn-controller-ovs-h5k8p\" (UID: \"b233fc1e-4730-4c0c-bf0d-741bf86d3a19\") " pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.223090 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b233fc1e-4730-4c0c-bf0d-741bf86d3a19-etc-ovs\") pod \"ovn-controller-ovs-h5k8p\" (UID: \"b233fc1e-4730-4c0c-bf0d-741bf86d3a19\") " pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.223110 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2801a17e-6108-4ffe-9eac-7068b93707e1-combined-ca-bundle\") pod \"ovn-controller-xh6ft\" (UID: \"2801a17e-6108-4ffe-9eac-7068b93707e1\") " pod="openstack/ovn-controller-xh6ft" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.246650 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.248028 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.254391 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.256220 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-jkhp7" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.256252 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.257886 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.257956 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.265243 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.325025 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2801a17e-6108-4ffe-9eac-7068b93707e1-var-run\") pod \"ovn-controller-xh6ft\" (UID: \"2801a17e-6108-4ffe-9eac-7068b93707e1\") " pod="openstack/ovn-controller-xh6ft" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.325083 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b233fc1e-4730-4c0c-bf0d-741bf86d3a19-scripts\") pod \"ovn-controller-ovs-h5k8p\" (UID: \"b233fc1e-4730-4c0c-bf0d-741bf86d3a19\") " pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.325105 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/b233fc1e-4730-4c0c-bf0d-741bf86d3a19-var-log\") pod \"ovn-controller-ovs-h5k8p\" (UID: \"b233fc1e-4730-4c0c-bf0d-741bf86d3a19\") " pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.325130 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2801a17e-6108-4ffe-9eac-7068b93707e1-var-log-ovn\") pod \"ovn-controller-xh6ft\" (UID: \"2801a17e-6108-4ffe-9eac-7068b93707e1\") " pod="openstack/ovn-controller-xh6ft" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.325155 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b50a5134-eeac-410c-8f07-b6a4c141386e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.325173 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b50a5134-eeac-410c-8f07-b6a4c141386e-config\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.325199 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b50a5134-eeac-410c-8f07-b6a4c141386e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.325218 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2801a17e-6108-4ffe-9eac-7068b93707e1-scripts\") pod \"ovn-controller-xh6ft\" (UID: 
\"2801a17e-6108-4ffe-9eac-7068b93707e1\") " pod="openstack/ovn-controller-xh6ft" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.325236 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b50a5134-eeac-410c-8f07-b6a4c141386e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.325256 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b233fc1e-4730-4c0c-bf0d-741bf86d3a19-var-run\") pod \"ovn-controller-ovs-h5k8p\" (UID: \"b233fc1e-4730-4c0c-bf0d-741bf86d3a19\") " pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.325286 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnh7q\" (UniqueName: \"kubernetes.io/projected/b50a5134-eeac-410c-8f07-b6a4c141386e-kube-api-access-pnh7q\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.325306 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b233fc1e-4730-4c0c-bf0d-741bf86d3a19-etc-ovs\") pod \"ovn-controller-ovs-h5k8p\" (UID: \"b233fc1e-4730-4c0c-bf0d-741bf86d3a19\") " pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.325331 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2801a17e-6108-4ffe-9eac-7068b93707e1-combined-ca-bundle\") pod \"ovn-controller-xh6ft\" (UID: \"2801a17e-6108-4ffe-9eac-7068b93707e1\") " pod="openstack/ovn-controller-xh6ft" Feb 18 
00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.325357 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6v9z\" (UniqueName: \"kubernetes.io/projected/b233fc1e-4730-4c0c-bf0d-741bf86d3a19-kube-api-access-h6v9z\") pod \"ovn-controller-ovs-h5k8p\" (UID: \"b233fc1e-4730-4c0c-bf0d-741bf86d3a19\") " pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.325377 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2801a17e-6108-4ffe-9eac-7068b93707e1-var-run-ovn\") pod \"ovn-controller-xh6ft\" (UID: \"2801a17e-6108-4ffe-9eac-7068b93707e1\") " pod="openstack/ovn-controller-xh6ft" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.325392 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.325414 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b233fc1e-4730-4c0c-bf0d-741bf86d3a19-var-lib\") pod \"ovn-controller-ovs-h5k8p\" (UID: \"b233fc1e-4730-4c0c-bf0d-741bf86d3a19\") " pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.325436 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq89w\" (UniqueName: \"kubernetes.io/projected/2801a17e-6108-4ffe-9eac-7068b93707e1-kube-api-access-sq89w\") pod \"ovn-controller-xh6ft\" (UID: \"2801a17e-6108-4ffe-9eac-7068b93707e1\") " pod="openstack/ovn-controller-xh6ft" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.325465 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b50a5134-eeac-410c-8f07-b6a4c141386e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.325501 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2801a17e-6108-4ffe-9eac-7068b93707e1-ovn-controller-tls-certs\") pod \"ovn-controller-xh6ft\" (UID: \"2801a17e-6108-4ffe-9eac-7068b93707e1\") " pod="openstack/ovn-controller-xh6ft" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.325520 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b50a5134-eeac-410c-8f07-b6a4c141386e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.326052 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2801a17e-6108-4ffe-9eac-7068b93707e1-var-run\") pod \"ovn-controller-xh6ft\" (UID: \"2801a17e-6108-4ffe-9eac-7068b93707e1\") " pod="openstack/ovn-controller-xh6ft" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.326128 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b233fc1e-4730-4c0c-bf0d-741bf86d3a19-var-run\") pod \"ovn-controller-ovs-h5k8p\" (UID: \"b233fc1e-4730-4c0c-bf0d-741bf86d3a19\") " pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.326172 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b233fc1e-4730-4c0c-bf0d-741bf86d3a19-etc-ovs\") 
pod \"ovn-controller-ovs-h5k8p\" (UID: \"b233fc1e-4730-4c0c-bf0d-741bf86d3a19\") " pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.326652 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b233fc1e-4730-4c0c-bf0d-741bf86d3a19-var-lib\") pod \"ovn-controller-ovs-h5k8p\" (UID: \"b233fc1e-4730-4c0c-bf0d-741bf86d3a19\") " pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.326803 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2801a17e-6108-4ffe-9eac-7068b93707e1-var-log-ovn\") pod \"ovn-controller-xh6ft\" (UID: \"2801a17e-6108-4ffe-9eac-7068b93707e1\") " pod="openstack/ovn-controller-xh6ft" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.326884 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b233fc1e-4730-4c0c-bf0d-741bf86d3a19-var-log\") pod \"ovn-controller-ovs-h5k8p\" (UID: \"b233fc1e-4730-4c0c-bf0d-741bf86d3a19\") " pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.327098 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2801a17e-6108-4ffe-9eac-7068b93707e1-var-run-ovn\") pod \"ovn-controller-xh6ft\" (UID: \"2801a17e-6108-4ffe-9eac-7068b93707e1\") " pod="openstack/ovn-controller-xh6ft" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.328484 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2801a17e-6108-4ffe-9eac-7068b93707e1-scripts\") pod \"ovn-controller-xh6ft\" (UID: \"2801a17e-6108-4ffe-9eac-7068b93707e1\") " pod="openstack/ovn-controller-xh6ft" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.328765 4847 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b233fc1e-4730-4c0c-bf0d-741bf86d3a19-scripts\") pod \"ovn-controller-ovs-h5k8p\" (UID: \"b233fc1e-4730-4c0c-bf0d-741bf86d3a19\") " pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.332142 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2801a17e-6108-4ffe-9eac-7068b93707e1-ovn-controller-tls-certs\") pod \"ovn-controller-xh6ft\" (UID: \"2801a17e-6108-4ffe-9eac-7068b93707e1\") " pod="openstack/ovn-controller-xh6ft" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.333503 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2801a17e-6108-4ffe-9eac-7068b93707e1-combined-ca-bundle\") pod \"ovn-controller-xh6ft\" (UID: \"2801a17e-6108-4ffe-9eac-7068b93707e1\") " pod="openstack/ovn-controller-xh6ft" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.342373 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6v9z\" (UniqueName: \"kubernetes.io/projected/b233fc1e-4730-4c0c-bf0d-741bf86d3a19-kube-api-access-h6v9z\") pod \"ovn-controller-ovs-h5k8p\" (UID: \"b233fc1e-4730-4c0c-bf0d-741bf86d3a19\") " pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.344248 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq89w\" (UniqueName: \"kubernetes.io/projected/2801a17e-6108-4ffe-9eac-7068b93707e1-kube-api-access-sq89w\") pod \"ovn-controller-xh6ft\" (UID: \"2801a17e-6108-4ffe-9eac-7068b93707e1\") " pod="openstack/ovn-controller-xh6ft" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.427515 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/b50a5134-eeac-410c-8f07-b6a4c141386e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.427567 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b50a5134-eeac-410c-8f07-b6a4c141386e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.427619 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnh7q\" (UniqueName: \"kubernetes.io/projected/b50a5134-eeac-410c-8f07-b6a4c141386e-kube-api-access-pnh7q\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.427668 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.427711 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b50a5134-eeac-410c-8f07-b6a4c141386e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.427755 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b50a5134-eeac-410c-8f07-b6a4c141386e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " 
pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.427797 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b50a5134-eeac-410c-8f07-b6a4c141386e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.427813 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b50a5134-eeac-410c-8f07-b6a4c141386e-config\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.428088 4847 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.428201 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b50a5134-eeac-410c-8f07-b6a4c141386e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.428660 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b50a5134-eeac-410c-8f07-b6a4c141386e-config\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.429038 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/b50a5134-eeac-410c-8f07-b6a4c141386e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.431282 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b50a5134-eeac-410c-8f07-b6a4c141386e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.434156 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b50a5134-eeac-410c-8f07-b6a4c141386e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.436419 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b50a5134-eeac-410c-8f07-b6a4c141386e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.445677 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnh7q\" (UniqueName: \"kubernetes.io/projected/b50a5134-eeac-410c-8f07-b6a4c141386e-kube-api-access-pnh7q\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.450522 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"b50a5134-eeac-410c-8f07-b6a4c141386e\") " pod="openstack/ovsdbserver-nb-0" Feb 
18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.475964 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xh6ft" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.503723 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:46:29 crc kubenswrapper[4847]: I0218 00:46:29.565157 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:32 crc kubenswrapper[4847]: I0218 00:46:32.872317 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 00:46:32 crc kubenswrapper[4847]: I0218 00:46:32.876420 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:32 crc kubenswrapper[4847]: I0218 00:46:32.878540 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 18 00:46:32 crc kubenswrapper[4847]: I0218 00:46:32.878893 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 18 00:46:32 crc kubenswrapper[4847]: I0218 00:46:32.879114 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 18 00:46:32 crc kubenswrapper[4847]: I0218 00:46:32.879263 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-xdtxs" Feb 18 00:46:32 crc kubenswrapper[4847]: I0218 00:46:32.886891 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.003151 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbdd48eb-2162-4fc5-9d56-3e58835ac6bc-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.003216 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbdd48eb-2162-4fc5-9d56-3e58835ac6bc-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.003344 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbdd48eb-2162-4fc5-9d56-3e58835ac6bc-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.003527 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwvp4\" (UniqueName: \"kubernetes.io/projected/cbdd48eb-2162-4fc5-9d56-3e58835ac6bc-kube-api-access-fwvp4\") pod \"ovsdbserver-sb-0\" (UID: \"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.003553 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbdd48eb-2162-4fc5-9d56-3e58835ac6bc-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.003581 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbdd48eb-2162-4fc5-9d56-3e58835ac6bc-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") 
" pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.003737 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.003799 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cbdd48eb-2162-4fc5-9d56-3e58835ac6bc-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.105853 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbdd48eb-2162-4fc5-9d56-3e58835ac6bc-config\") pod \"ovsdbserver-sb-0\" (UID: \"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.106455 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbdd48eb-2162-4fc5-9d56-3e58835ac6bc-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.106693 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbdd48eb-2162-4fc5-9d56-3e58835ac6bc-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.107026 4847 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbdd48eb-2162-4fc5-9d56-3e58835ac6bc-config\") pod \"ovsdbserver-sb-0\" (UID: \"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.108735 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwvp4\" (UniqueName: \"kubernetes.io/projected/cbdd48eb-2162-4fc5-9d56-3e58835ac6bc-kube-api-access-fwvp4\") pod \"ovsdbserver-sb-0\" (UID: \"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.108947 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbdd48eb-2162-4fc5-9d56-3e58835ac6bc-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.109121 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbdd48eb-2162-4fc5-9d56-3e58835ac6bc-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.109375 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.109573 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cbdd48eb-2162-4fc5-9d56-3e58835ac6bc-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: 
\"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.109710 4847 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.110144 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cbdd48eb-2162-4fc5-9d56-3e58835ac6bc-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.110150 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbdd48eb-2162-4fc5-9d56-3e58835ac6bc-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.116118 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbdd48eb-2162-4fc5-9d56-3e58835ac6bc-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.116651 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbdd48eb-2162-4fc5-9d56-3e58835ac6bc-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.139730 4847 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fwvp4\" (UniqueName: \"kubernetes.io/projected/cbdd48eb-2162-4fc5-9d56-3e58835ac6bc-kube-api-access-fwvp4\") pod \"ovsdbserver-sb-0\" (UID: \"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.144061 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbdd48eb-2162-4fc5-9d56-3e58835ac6bc-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.149865 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc\") " pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:33 crc kubenswrapper[4847]: I0218 00:46:33.207384 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:39 crc kubenswrapper[4847]: E0218 00:46:39.942682 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 18 00:46:39 crc kubenswrapper[4847]: E0218 00:46:39.943376 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbfpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(1977a705-30e5-456c-8e2c-2cd05e0325e3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:46:39 crc 
kubenswrapper[4847]: E0218 00:46:39.944801 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="1977a705-30e5-456c-8e2c-2cd05e0325e3" Feb 18 00:46:40 crc kubenswrapper[4847]: E0218 00:46:40.367341 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="1977a705-30e5-456c-8e2c-2cd05e0325e3" Feb 18 00:46:40 crc kubenswrapper[4847]: E0218 00:46:40.976742 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 00:46:40 crc kubenswrapper[4847]: E0218 00:46:40.976903 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m4hbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-8946t_openstack(f1b86276-e92b-4785-9d34-b66040d0ece2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:46:40 crc kubenswrapper[4847]: E0218 00:46:40.978229 4847 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-8946t" podUID="f1b86276-e92b-4785-9d34-b66040d0ece2" Feb 18 00:46:41 crc kubenswrapper[4847]: E0218 00:46:41.055164 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 00:46:41 crc kubenswrapper[4847]: E0218 00:46:41.055319 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wzmg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-dgbg7_openstack(cb606413-c8e6-4b40-8073-24934cc0be3b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:46:41 crc kubenswrapper[4847]: E0218 00:46:41.057507 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-dgbg7" podUID="cb606413-c8e6-4b40-8073-24934cc0be3b" Feb 18 00:46:41 crc kubenswrapper[4847]: E0218 00:46:41.373114 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-8946t" podUID="f1b86276-e92b-4785-9d34-b66040d0ece2" Feb 18 00:46:43 crc kubenswrapper[4847]: E0218 00:46:43.370881 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 00:46:43 crc kubenswrapper[4847]: E0218 00:46:43.371481 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7z5dh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-79d5g_openstack(1213dcf5-90b2-4824-865c-baf5f7646ebc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:46:43 crc kubenswrapper[4847]: E0218 00:46:43.372613 4847 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-79d5g" podUID="1213dcf5-90b2-4824-865c-baf5f7646ebc" Feb 18 00:46:43 crc kubenswrapper[4847]: E0218 00:46:43.392458 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-79d5g" podUID="1213dcf5-90b2-4824-865c-baf5f7646ebc" Feb 18 00:46:43 crc kubenswrapper[4847]: E0218 00:46:43.466705 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 18 00:46:43 crc kubenswrapper[4847]: E0218 00:46:43.466857 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4gxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-lfjzx_openstack(4f78ef84-5dbb-4076-a834-d990c03c9b57): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:46:43 crc kubenswrapper[4847]: E0218 00:46:43.468176 4847 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-lfjzx" podUID="4f78ef84-5dbb-4076-a834-d990c03c9b57" Feb 18 00:46:43 crc kubenswrapper[4847]: I0218 00:46:43.517114 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dgbg7" Feb 18 00:46:43 crc kubenswrapper[4847]: I0218 00:46:43.615280 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzmg5\" (UniqueName: \"kubernetes.io/projected/cb606413-c8e6-4b40-8073-24934cc0be3b-kube-api-access-wzmg5\") pod \"cb606413-c8e6-4b40-8073-24934cc0be3b\" (UID: \"cb606413-c8e6-4b40-8073-24934cc0be3b\") " Feb 18 00:46:43 crc kubenswrapper[4847]: I0218 00:46:43.615713 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb606413-c8e6-4b40-8073-24934cc0be3b-config\") pod \"cb606413-c8e6-4b40-8073-24934cc0be3b\" (UID: \"cb606413-c8e6-4b40-8073-24934cc0be3b\") " Feb 18 00:46:43 crc kubenswrapper[4847]: I0218 00:46:43.616273 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb606413-c8e6-4b40-8073-24934cc0be3b-config" (OuterVolumeSpecName: "config") pod "cb606413-c8e6-4b40-8073-24934cc0be3b" (UID: "cb606413-c8e6-4b40-8073-24934cc0be3b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:46:43 crc kubenswrapper[4847]: I0218 00:46:43.620116 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb606413-c8e6-4b40-8073-24934cc0be3b-kube-api-access-wzmg5" (OuterVolumeSpecName: "kube-api-access-wzmg5") pod "cb606413-c8e6-4b40-8073-24934cc0be3b" (UID: "cb606413-c8e6-4b40-8073-24934cc0be3b"). InnerVolumeSpecName "kube-api-access-wzmg5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:46:43 crc kubenswrapper[4847]: I0218 00:46:43.718187 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzmg5\" (UniqueName: \"kubernetes.io/projected/cb606413-c8e6-4b40-8073-24934cc0be3b-kube-api-access-wzmg5\") on node \"crc\" DevicePath \"\"" Feb 18 00:46:43 crc kubenswrapper[4847]: I0218 00:46:43.718214 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb606413-c8e6-4b40-8073-24934cc0be3b-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:46:44 crc kubenswrapper[4847]: W0218 00:46:44.283663 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod109c4d3d_c276_45ed_93d2_d1414e156fb9.slice/crio-bca5a47ad6d81fbf5b08f63a19ddedd1aaae8a3e3e3bcedc42bc9266ee02bbd5 WatchSource:0}: Error finding container bca5a47ad6d81fbf5b08f63a19ddedd1aaae8a3e3e3bcedc42bc9266ee02bbd5: Status 404 returned error can't find the container with id bca5a47ad6d81fbf5b08f63a19ddedd1aaae8a3e3e3bcedc42bc9266ee02bbd5 Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.292712 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.318679 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-25tvl"] Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.327822 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.335790 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.344555 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7557c96bc4-9fcqd"] Feb 18 00:46:44 crc kubenswrapper[4847]: 
I0218 00:46:44.353161 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 00:46:44 crc kubenswrapper[4847]: W0218 00:46:44.376855 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1989970b_d11c_44b8_b0b7_011c8e842c1f.slice/crio-9e0ce529684c3a98f2625b3523d9295a1f3140b0889e0b08ecb6327e9f1cf4c1 WatchSource:0}: Error finding container 9e0ce529684c3a98f2625b3523d9295a1f3140b0889e0b08ecb6327e9f1cf4c1: Status 404 returned error can't find the container with id 9e0ce529684c3a98f2625b3523d9295a1f3140b0889e0b08ecb6327e9f1cf4c1 Feb 18 00:46:44 crc kubenswrapper[4847]: W0218 00:46:44.382029 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod085dadd3_8aae_4c94_84e4_6289f1e537e1.slice/crio-c109c678c440f5c8a2af7ff5c9d7d610e58334985085d3d7ef865fd001c6011b WatchSource:0}: Error finding container c109c678c440f5c8a2af7ff5c9d7d610e58334985085d3d7ef865fd001c6011b: Status 404 returned error can't find the container with id c109c678c440f5c8a2af7ff5c9d7d610e58334985085d3d7ef865fd001c6011b Feb 18 00:46:44 crc kubenswrapper[4847]: W0218 00:46:44.383780 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0375fa1c_b349_44b5_8ba6_1d1afe1715ce.slice/crio-11e51f3e9867395e9fc48fdef2d8622a8b5a2417ef35c3fa0f6a0bbd35553a19 WatchSource:0}: Error finding container 11e51f3e9867395e9fc48fdef2d8622a8b5a2417ef35c3fa0f6a0bbd35553a19: Status 404 returned error can't find the container with id 11e51f3e9867395e9fc48fdef2d8622a8b5a2417ef35c3fa0f6a0bbd35553a19 Feb 18 00:46:44 crc kubenswrapper[4847]: W0218 00:46:44.386348 4847 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51d1ffce_ce81_406d_a18a_b8d134c67914.slice/crio-a2f03665d388d79470fb4f9fc1f4a6a1d50676f5c11195f99fece74d5de0dc7d WatchSource:0}: Error finding container a2f03665d388d79470fb4f9fc1f4a6a1d50676f5c11195f99fece74d5de0dc7d: Status 404 returned error can't find the container with id a2f03665d388d79470fb4f9fc1f4a6a1d50676f5c11195f99fece74d5de0dc7d Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.415179 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7557c96bc4-9fcqd" event={"ID":"51d1ffce-ce81-406d-a18a-b8d134c67914","Type":"ContainerStarted","Data":"a2f03665d388d79470fb4f9fc1f4a6a1d50676f5c11195f99fece74d5de0dc7d"} Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.417412 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"109c4d3d-c276-45ed-93d2-d1414e156fb9","Type":"ContainerStarted","Data":"bca5a47ad6d81fbf5b08f63a19ddedd1aaae8a3e3e3bcedc42bc9266ee02bbd5"} Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.433679 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72","Type":"ContainerStarted","Data":"c95afc7c0b71ea1ba7a1240ed14a3e19a9016fc1ac110bbc7eb5a84ad7d6bc43"} Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.439287 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-25tvl" event={"ID":"8095b217-447f-4789-8ef4-fa117075737c","Type":"ContainerStarted","Data":"06651ad15c18636e971f13e256c6ddb1183bdd9b7b5cc86c055c92b9459ae630"} Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.444131 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xh6ft"] Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.447324 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-675f4bcbfc-dgbg7" event={"ID":"cb606413-c8e6-4b40-8073-24934cc0be3b","Type":"ContainerDied","Data":"df80d3ffb548714e0a4659b3984b26222df0603c482816bbcf78e97b157272ff"} Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.447415 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dgbg7" Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.459978 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0375fa1c-b349-44b5-8ba6-1d1afe1715ce","Type":"ContainerStarted","Data":"11e51f3e9867395e9fc48fdef2d8622a8b5a2417ef35c3fa0f6a0bbd35553a19"} Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.463460 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"085dadd3-8aae-4c94-84e4-6289f1e537e1","Type":"ContainerStarted","Data":"c109c678c440f5c8a2af7ff5c9d7d610e58334985085d3d7ef865fd001c6011b"} Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.468944 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1989970b-d11c-44b8-b0b7-011c8e842c1f","Type":"ContainerStarted","Data":"9e0ce529684c3a98f2625b3523d9295a1f3140b0889e0b08ecb6327e9f1cf4c1"} Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.626001 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dgbg7"] Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.637483 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dgbg7"] Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.789816 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.859764 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-lfjzx" Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.943692 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f78ef84-5dbb-4076-a834-d990c03c9b57-config\") pod \"4f78ef84-5dbb-4076-a834-d990c03c9b57\" (UID: \"4f78ef84-5dbb-4076-a834-d990c03c9b57\") " Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.943853 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4gxn\" (UniqueName: \"kubernetes.io/projected/4f78ef84-5dbb-4076-a834-d990c03c9b57-kube-api-access-q4gxn\") pod \"4f78ef84-5dbb-4076-a834-d990c03c9b57\" (UID: \"4f78ef84-5dbb-4076-a834-d990c03c9b57\") " Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.943893 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f78ef84-5dbb-4076-a834-d990c03c9b57-dns-svc\") pod \"4f78ef84-5dbb-4076-a834-d990c03c9b57\" (UID: \"4f78ef84-5dbb-4076-a834-d990c03c9b57\") " Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.944926 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f78ef84-5dbb-4076-a834-d990c03c9b57-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4f78ef84-5dbb-4076-a834-d990c03c9b57" (UID: "4f78ef84-5dbb-4076-a834-d990c03c9b57"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.945243 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f78ef84-5dbb-4076-a834-d990c03c9b57-config" (OuterVolumeSpecName: "config") pod "4f78ef84-5dbb-4076-a834-d990c03c9b57" (UID: "4f78ef84-5dbb-4076-a834-d990c03c9b57"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:46:44 crc kubenswrapper[4847]: I0218 00:46:44.951838 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f78ef84-5dbb-4076-a834-d990c03c9b57-kube-api-access-q4gxn" (OuterVolumeSpecName: "kube-api-access-q4gxn") pod "4f78ef84-5dbb-4076-a834-d990c03c9b57" (UID: "4f78ef84-5dbb-4076-a834-d990c03c9b57"). InnerVolumeSpecName "kube-api-access-q4gxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:46:45 crc kubenswrapper[4847]: I0218 00:46:45.046674 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4gxn\" (UniqueName: \"kubernetes.io/projected/4f78ef84-5dbb-4076-a834-d990c03c9b57-kube-api-access-q4gxn\") on node \"crc\" DevicePath \"\"" Feb 18 00:46:45 crc kubenswrapper[4847]: I0218 00:46:45.046742 4847 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f78ef84-5dbb-4076-a834-d990c03c9b57-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:46:45 crc kubenswrapper[4847]: I0218 00:46:45.046753 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f78ef84-5dbb-4076-a834-d990c03c9b57-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:46:45 crc kubenswrapper[4847]: I0218 00:46:45.417120 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb606413-c8e6-4b40-8073-24934cc0be3b" path="/var/lib/kubelet/pods/cb606413-c8e6-4b40-8073-24934cc0be3b/volumes" Feb 18 00:46:45 crc kubenswrapper[4847]: I0218 00:46:45.482974 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"109c4d3d-c276-45ed-93d2-d1414e156fb9","Type":"ContainerStarted","Data":"dfb2dbc887c56fa4604d4982215fe507f15b9ec4a8e1f3cbaae332bd506f4732"} Feb 18 00:46:45 crc kubenswrapper[4847]: I0218 00:46:45.485696 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-xh6ft" event={"ID":"2801a17e-6108-4ffe-9eac-7068b93707e1","Type":"ContainerStarted","Data":"ec6750a6997a262cc0aaf1b2dc768e848298487008732e3bd9dda64a566cb06f"} Feb 18 00:46:45 crc kubenswrapper[4847]: I0218 00:46:45.488046 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-lfjzx" event={"ID":"4f78ef84-5dbb-4076-a834-d990c03c9b57","Type":"ContainerDied","Data":"0f55b663e5b0fb8d2d8b72e2cb1606bce6eef6c1599db80421dee10e8f12b4a1"} Feb 18 00:46:45 crc kubenswrapper[4847]: I0218 00:46:45.488142 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-lfjzx" Feb 18 00:46:45 crc kubenswrapper[4847]: I0218 00:46:45.491189 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc","Type":"ContainerStarted","Data":"066fdf6df4dc03662c4b4bbd8f9a390a9640c5c21aca68efcd8088c4d9da83fc"} Feb 18 00:46:45 crc kubenswrapper[4847]: I0218 00:46:45.496252 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d","Type":"ContainerStarted","Data":"fd125797db78eb9c1069ec9e94328c327c2fce1794180d3c76711691cd2e7ec9"} Feb 18 00:46:45 crc kubenswrapper[4847]: I0218 00:46:45.509650 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7557c96bc4-9fcqd" event={"ID":"51d1ffce-ce81-406d-a18a-b8d134c67914","Type":"ContainerStarted","Data":"57da9e7583d4c68c56142beb5dcafed41cb1b3ceee95701407d0c57562c238bd"} Feb 18 00:46:45 crc kubenswrapper[4847]: I0218 00:46:45.519713 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-h5k8p"] Feb 18 00:46:45 crc kubenswrapper[4847]: I0218 00:46:45.580714 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-lfjzx"] Feb 18 00:46:45 crc kubenswrapper[4847]: I0218 
00:46:45.593645 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-lfjzx"] Feb 18 00:46:45 crc kubenswrapper[4847]: I0218 00:46:45.603746 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7557c96bc4-9fcqd" podStartSLOduration=18.603723574 podStartE2EDuration="18.603723574s" podCreationTimestamp="2026-02-18 00:46:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:46:45.586291422 +0000 UTC m=+1278.963642374" watchObservedRunningTime="2026-02-18 00:46:45.603723574 +0000 UTC m=+1278.981074516" Feb 18 00:46:45 crc kubenswrapper[4847]: I0218 00:46:45.629162 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 18 00:46:45 crc kubenswrapper[4847]: W0218 00:46:45.709590 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb233fc1e_4730_4c0c_bf0d_741bf86d3a19.slice/crio-2e6143e0b40b22b436c6ce7bd70312a38f859f145a38dba452a7e922bd33ab99 WatchSource:0}: Error finding container 2e6143e0b40b22b436c6ce7bd70312a38f859f145a38dba452a7e922bd33ab99: Status 404 returned error can't find the container with id 2e6143e0b40b22b436c6ce7bd70312a38f859f145a38dba452a7e922bd33ab99 Feb 18 00:46:46 crc kubenswrapper[4847]: I0218 00:46:46.521441 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-h5k8p" event={"ID":"b233fc1e-4730-4c0c-bf0d-741bf86d3a19","Type":"ContainerStarted","Data":"2e6143e0b40b22b436c6ce7bd70312a38f859f145a38dba452a7e922bd33ab99"} Feb 18 00:46:46 crc kubenswrapper[4847]: I0218 00:46:46.523148 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b50a5134-eeac-410c-8f07-b6a4c141386e","Type":"ContainerStarted","Data":"220b25c38d20abbd63139a115269f6d6f762e8720a76f9e2383ab6e2f6cea337"} 
Feb 18 00:46:47 crc kubenswrapper[4847]: I0218 00:46:47.423303 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f78ef84-5dbb-4076-a834-d990c03c9b57" path="/var/lib/kubelet/pods/4f78ef84-5dbb-4076-a834-d990c03c9b57/volumes" Feb 18 00:46:47 crc kubenswrapper[4847]: I0218 00:46:47.454643 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7557c96bc4-9fcqd" Feb 18 00:46:47 crc kubenswrapper[4847]: I0218 00:46:47.454701 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7557c96bc4-9fcqd" Feb 18 00:46:47 crc kubenswrapper[4847]: I0218 00:46:47.464920 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7557c96bc4-9fcqd" Feb 18 00:46:47 crc kubenswrapper[4847]: I0218 00:46:47.542046 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7557c96bc4-9fcqd" Feb 18 00:46:47 crc kubenswrapper[4847]: I0218 00:46:47.658723 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-68cc555589-wskw7"] Feb 18 00:46:48 crc kubenswrapper[4847]: I0218 00:46:48.560844 4847 generic.go:334] "Generic (PLEG): container finished" podID="109c4d3d-c276-45ed-93d2-d1414e156fb9" containerID="dfb2dbc887c56fa4604d4982215fe507f15b9ec4a8e1f3cbaae332bd506f4732" exitCode=0 Feb 18 00:46:48 crc kubenswrapper[4847]: I0218 00:46:48.560946 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"109c4d3d-c276-45ed-93d2-d1414e156fb9","Type":"ContainerDied","Data":"dfb2dbc887c56fa4604d4982215fe507f15b9ec4a8e1f3cbaae332bd506f4732"} Feb 18 00:46:48 crc kubenswrapper[4847]: I0218 00:46:48.564361 4847 generic.go:334] "Generic (PLEG): container finished" podID="5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72" containerID="c95afc7c0b71ea1ba7a1240ed14a3e19a9016fc1ac110bbc7eb5a84ad7d6bc43" exitCode=0 Feb 18 00:46:48 crc 
kubenswrapper[4847]: I0218 00:46:48.564678 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72","Type":"ContainerDied","Data":"c95afc7c0b71ea1ba7a1240ed14a3e19a9016fc1ac110bbc7eb5a84ad7d6bc43"} Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.544387 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-t5ftr"] Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.546134 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-t5ftr" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.548357 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.575943 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-t5ftr"] Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.610570 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9267970-665c-43c5-be4c-1cd26b39ad2d-config\") pod \"ovn-controller-metrics-t5ftr\" (UID: \"a9267970-665c-43c5-be4c-1cd26b39ad2d\") " pod="openstack/ovn-controller-metrics-t5ftr" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.610646 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhcbt\" (UniqueName: \"kubernetes.io/projected/a9267970-665c-43c5-be4c-1cd26b39ad2d-kube-api-access-lhcbt\") pod \"ovn-controller-metrics-t5ftr\" (UID: \"a9267970-665c-43c5-be4c-1cd26b39ad2d\") " pod="openstack/ovn-controller-metrics-t5ftr" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.610696 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/a9267970-665c-43c5-be4c-1cd26b39ad2d-ovs-rundir\") pod \"ovn-controller-metrics-t5ftr\" (UID: \"a9267970-665c-43c5-be4c-1cd26b39ad2d\") " pod="openstack/ovn-controller-metrics-t5ftr" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.610722 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a9267970-665c-43c5-be4c-1cd26b39ad2d-ovn-rundir\") pod \"ovn-controller-metrics-t5ftr\" (UID: \"a9267970-665c-43c5-be4c-1cd26b39ad2d\") " pod="openstack/ovn-controller-metrics-t5ftr" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.610813 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9267970-665c-43c5-be4c-1cd26b39ad2d-combined-ca-bundle\") pod \"ovn-controller-metrics-t5ftr\" (UID: \"a9267970-665c-43c5-be4c-1cd26b39ad2d\") " pod="openstack/ovn-controller-metrics-t5ftr" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.610856 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9267970-665c-43c5-be4c-1cd26b39ad2d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-t5ftr\" (UID: \"a9267970-665c-43c5-be4c-1cd26b39ad2d\") " pod="openstack/ovn-controller-metrics-t5ftr" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.618468 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0375fa1c-b349-44b5-8ba6-1d1afe1715ce","Type":"ContainerStarted","Data":"fb5591ed215956a552a616c15a648d98d59faa59c1ad578b2f4c6e631afa20ea"} Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.619066 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.620659 4847 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"085dadd3-8aae-4c94-84e4-6289f1e537e1","Type":"ContainerStarted","Data":"86982d7a712414eed631654dbed1d64e42985a26193df8f04208865659942c84"} Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.621175 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.625420 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"109c4d3d-c276-45ed-93d2-d1414e156fb9","Type":"ContainerStarted","Data":"32e4a9f0d34639970a98ecaea4803217661797106c799e49e903c98eee5d06ef"} Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.628558 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72","Type":"ContainerStarted","Data":"f7838dbf244fd0cbcee5b3983e694246005b7f128be151f918228c046c00bd26"} Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.634489 4847 generic.go:334] "Generic (PLEG): container finished" podID="b233fc1e-4730-4c0c-bf0d-741bf86d3a19" containerID="029bad5895744be5f6c219d797d218a7f57b03b5f84c33badd5027068f5be64e" exitCode=0 Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.634533 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-h5k8p" event={"ID":"b233fc1e-4730-4c0c-bf0d-741bf86d3a19","Type":"ContainerDied","Data":"029bad5895744be5f6c219d797d218a7f57b03b5f84c33badd5027068f5be64e"} Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.650317 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc","Type":"ContainerStarted","Data":"817fb3afe243d100701e4c386da7b5b9562ec81f3b792b8b645f817c18efb602"} Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.653879 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-nb-0" event={"ID":"b50a5134-eeac-410c-8f07-b6a4c141386e","Type":"ContainerStarted","Data":"3976cc9aaca762e7eb48ea7a949a8ec214babac13492f80f07aed1ad4ec65462"} Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.658745 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=19.609514841 podStartE2EDuration="26.658725271s" podCreationTimestamp="2026-02-18 00:46:26 +0000 UTC" firstStartedPulling="2026-02-18 00:46:44.389593666 +0000 UTC m=+1277.766944608" lastFinishedPulling="2026-02-18 00:46:51.438804096 +0000 UTC m=+1284.816155038" observedRunningTime="2026-02-18 00:46:52.631771683 +0000 UTC m=+1286.009122625" watchObservedRunningTime="2026-02-18 00:46:52.658725271 +0000 UTC m=+1286.036076213" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.664827 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xh6ft" event={"ID":"2801a17e-6108-4ffe-9eac-7068b93707e1","Type":"ContainerStarted","Data":"ffe2a67ff4f766eed77e282f21f660e451bbd4f632d2d5cc182ab72408bef506"} Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.665367 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-xh6ft" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.674698 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=12.059811834 podStartE2EDuration="32.674672999s" podCreationTimestamp="2026-02-18 00:46:20 +0000 UTC" firstStartedPulling="2026-02-18 00:46:22.882883144 +0000 UTC m=+1256.260234086" lastFinishedPulling="2026-02-18 00:46:43.497744309 +0000 UTC m=+1276.875095251" observedRunningTime="2026-02-18 00:46:52.668421361 +0000 UTC m=+1286.045772313" watchObservedRunningTime="2026-02-18 00:46:52.674672999 +0000 UTC m=+1286.052023941" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.693097 4847 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-25tvl" event={"ID":"8095b217-447f-4789-8ef4-fa117075737c","Type":"ContainerStarted","Data":"980ea5bcb863cfc64011c473ea7f9238d09b2e02e09fcd15d15c475408cbfa36"} Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.710100 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=30.710080717 podStartE2EDuration="30.710080717s" podCreationTimestamp="2026-02-18 00:46:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:46:52.698439802 +0000 UTC m=+1286.075790744" watchObservedRunningTime="2026-02-18 00:46:52.710080717 +0000 UTC m=+1286.087431649" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.718051 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9267970-665c-43c5-be4c-1cd26b39ad2d-combined-ca-bundle\") pod \"ovn-controller-metrics-t5ftr\" (UID: \"a9267970-665c-43c5-be4c-1cd26b39ad2d\") " pod="openstack/ovn-controller-metrics-t5ftr" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.718136 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9267970-665c-43c5-be4c-1cd26b39ad2d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-t5ftr\" (UID: \"a9267970-665c-43c5-be4c-1cd26b39ad2d\") " pod="openstack/ovn-controller-metrics-t5ftr" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.718164 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9267970-665c-43c5-be4c-1cd26b39ad2d-config\") pod \"ovn-controller-metrics-t5ftr\" (UID: \"a9267970-665c-43c5-be4c-1cd26b39ad2d\") " pod="openstack/ovn-controller-metrics-t5ftr" Feb 18 
00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.718195 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhcbt\" (UniqueName: \"kubernetes.io/projected/a9267970-665c-43c5-be4c-1cd26b39ad2d-kube-api-access-lhcbt\") pod \"ovn-controller-metrics-t5ftr\" (UID: \"a9267970-665c-43c5-be4c-1cd26b39ad2d\") " pod="openstack/ovn-controller-metrics-t5ftr" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.718265 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a9267970-665c-43c5-be4c-1cd26b39ad2d-ovs-rundir\") pod \"ovn-controller-metrics-t5ftr\" (UID: \"a9267970-665c-43c5-be4c-1cd26b39ad2d\") " pod="openstack/ovn-controller-metrics-t5ftr" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.718297 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a9267970-665c-43c5-be4c-1cd26b39ad2d-ovn-rundir\") pod \"ovn-controller-metrics-t5ftr\" (UID: \"a9267970-665c-43c5-be4c-1cd26b39ad2d\") " pod="openstack/ovn-controller-metrics-t5ftr" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.726927 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-79d5g"] Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.727032 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a9267970-665c-43c5-be4c-1cd26b39ad2d-ovs-rundir\") pod \"ovn-controller-metrics-t5ftr\" (UID: \"a9267970-665c-43c5-be4c-1cd26b39ad2d\") " pod="openstack/ovn-controller-metrics-t5ftr" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.727727 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a9267970-665c-43c5-be4c-1cd26b39ad2d-ovn-rundir\") pod \"ovn-controller-metrics-t5ftr\" (UID: 
\"a9267970-665c-43c5-be4c-1cd26b39ad2d\") " pod="openstack/ovn-controller-metrics-t5ftr" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.731554 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9267970-665c-43c5-be4c-1cd26b39ad2d-config\") pod \"ovn-controller-metrics-t5ftr\" (UID: \"a9267970-665c-43c5-be4c-1cd26b39ad2d\") " pod="openstack/ovn-controller-metrics-t5ftr" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.741628 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9267970-665c-43c5-be4c-1cd26b39ad2d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-t5ftr\" (UID: \"a9267970-665c-43c5-be4c-1cd26b39ad2d\") " pod="openstack/ovn-controller-metrics-t5ftr" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.741962 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-vk958"] Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.742057 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9267970-665c-43c5-be4c-1cd26b39ad2d-combined-ca-bundle\") pod \"ovn-controller-metrics-t5ftr\" (UID: \"a9267970-665c-43c5-be4c-1cd26b39ad2d\") " pod="openstack/ovn-controller-metrics-t5ftr" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.745907 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.754228 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.758397 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhcbt\" (UniqueName: \"kubernetes.io/projected/a9267970-665c-43c5-be4c-1cd26b39ad2d-kube-api-access-lhcbt\") pod \"ovn-controller-metrics-t5ftr\" (UID: \"a9267970-665c-43c5-be4c-1cd26b39ad2d\") " pod="openstack/ovn-controller-metrics-t5ftr" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.766306 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=23.639926469 podStartE2EDuration="29.766283878s" podCreationTimestamp="2026-02-18 00:46:23 +0000 UTC" firstStartedPulling="2026-02-18 00:46:44.384494655 +0000 UTC m=+1277.761845597" lastFinishedPulling="2026-02-18 00:46:50.510852044 +0000 UTC m=+1283.888203006" observedRunningTime="2026-02-18 00:46:52.754463818 +0000 UTC m=+1286.131814760" watchObservedRunningTime="2026-02-18 00:46:52.766283878 +0000 UTC m=+1286.143634820" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.794600 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-vk958"] Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.816876 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-xh6ft" podStartSLOduration=17.783910159 podStartE2EDuration="23.816855946s" podCreationTimestamp="2026-02-18 00:46:29 +0000 UTC" firstStartedPulling="2026-02-18 00:46:44.478166553 +0000 UTC m=+1277.855517495" lastFinishedPulling="2026-02-18 00:46:50.51111233 +0000 UTC m=+1283.888463282" observedRunningTime="2026-02-18 00:46:52.802953246 +0000 UTC m=+1286.180304188" watchObservedRunningTime="2026-02-18 00:46:52.816855946 +0000 
UTC m=+1286.194206898" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.874781 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-t5ftr" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.929736 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c786cee2-3b0c-42f3-ba21-c5bb877332ef-config\") pod \"dnsmasq-dns-5bf47b49b7-vk958\" (UID: \"c786cee2-3b0c-42f3-ba21-c5bb877332ef\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.929816 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx5ln\" (UniqueName: \"kubernetes.io/projected/c786cee2-3b0c-42f3-ba21-c5bb877332ef-kube-api-access-cx5ln\") pod \"dnsmasq-dns-5bf47b49b7-vk958\" (UID: \"c786cee2-3b0c-42f3-ba21-c5bb877332ef\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.929858 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c786cee2-3b0c-42f3-ba21-c5bb877332ef-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-vk958\" (UID: \"c786cee2-3b0c-42f3-ba21-c5bb877332ef\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.929900 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c786cee2-3b0c-42f3-ba21-c5bb877332ef-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-vk958\" (UID: \"c786cee2-3b0c-42f3-ba21-c5bb877332ef\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" Feb 18 00:46:52 crc kubenswrapper[4847]: I0218 00:46:52.938916 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8946t"] Feb 18 
00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.026530 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-25tvl" podStartSLOduration=20.557954209000002 podStartE2EDuration="27.02651094s" podCreationTimestamp="2026-02-18 00:46:26 +0000 UTC" firstStartedPulling="2026-02-18 00:46:44.287045268 +0000 UTC m=+1277.664396210" lastFinishedPulling="2026-02-18 00:46:50.755601999 +0000 UTC m=+1284.132952941" observedRunningTime="2026-02-18 00:46:52.960026116 +0000 UTC m=+1286.337377058" watchObservedRunningTime="2026-02-18 00:46:53.02651094 +0000 UTC m=+1286.403861882" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.033849 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx5ln\" (UniqueName: \"kubernetes.io/projected/c786cee2-3b0c-42f3-ba21-c5bb877332ef-kube-api-access-cx5ln\") pod \"dnsmasq-dns-5bf47b49b7-vk958\" (UID: \"c786cee2-3b0c-42f3-ba21-c5bb877332ef\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.033917 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c786cee2-3b0c-42f3-ba21-c5bb877332ef-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-vk958\" (UID: \"c786cee2-3b0c-42f3-ba21-c5bb877332ef\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.033948 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c786cee2-3b0c-42f3-ba21-c5bb877332ef-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-vk958\" (UID: \"c786cee2-3b0c-42f3-ba21-c5bb877332ef\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.034036 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c786cee2-3b0c-42f3-ba21-c5bb877332ef-config\") pod \"dnsmasq-dns-5bf47b49b7-vk958\" (UID: \"c786cee2-3b0c-42f3-ba21-c5bb877332ef\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.034866 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c786cee2-3b0c-42f3-ba21-c5bb877332ef-config\") pod \"dnsmasq-dns-5bf47b49b7-vk958\" (UID: \"c786cee2-3b0c-42f3-ba21-c5bb877332ef\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.045481 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c786cee2-3b0c-42f3-ba21-c5bb877332ef-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-vk958\" (UID: \"c786cee2-3b0c-42f3-ba21-c5bb877332ef\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.046056 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c786cee2-3b0c-42f3-ba21-c5bb877332ef-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-vk958\" (UID: \"c786cee2-3b0c-42f3-ba21-c5bb877332ef\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.075668 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-k9ngj"] Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.077242 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-k9ngj" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.081091 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.100416 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-k9ngj"] Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.240053 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-k9ngj\" (UID: \"3404c138-4060-43de-9cc5-d6017b245f2c\") " pod="openstack/dnsmasq-dns-8554648995-k9ngj" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.240365 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-config\") pod \"dnsmasq-dns-8554648995-k9ngj\" (UID: \"3404c138-4060-43de-9cc5-d6017b245f2c\") " pod="openstack/dnsmasq-dns-8554648995-k9ngj" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.240382 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-dns-svc\") pod \"dnsmasq-dns-8554648995-k9ngj\" (UID: \"3404c138-4060-43de-9cc5-d6017b245f2c\") " pod="openstack/dnsmasq-dns-8554648995-k9ngj" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.240445 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-k9ngj\" (UID: \"3404c138-4060-43de-9cc5-d6017b245f2c\") " pod="openstack/dnsmasq-dns-8554648995-k9ngj" 
Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.240495 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlm6r\" (UniqueName: \"kubernetes.io/projected/3404c138-4060-43de-9cc5-d6017b245f2c-kube-api-access-wlm6r\") pod \"dnsmasq-dns-8554648995-k9ngj\" (UID: \"3404c138-4060-43de-9cc5-d6017b245f2c\") " pod="openstack/dnsmasq-dns-8554648995-k9ngj" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.341934 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-dns-svc\") pod \"dnsmasq-dns-8554648995-k9ngj\" (UID: \"3404c138-4060-43de-9cc5-d6017b245f2c\") " pod="openstack/dnsmasq-dns-8554648995-k9ngj" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.341973 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-config\") pod \"dnsmasq-dns-8554648995-k9ngj\" (UID: \"3404c138-4060-43de-9cc5-d6017b245f2c\") " pod="openstack/dnsmasq-dns-8554648995-k9ngj" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.342049 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-k9ngj\" (UID: \"3404c138-4060-43de-9cc5-d6017b245f2c\") " pod="openstack/dnsmasq-dns-8554648995-k9ngj" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.342107 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlm6r\" (UniqueName: \"kubernetes.io/projected/3404c138-4060-43de-9cc5-d6017b245f2c-kube-api-access-wlm6r\") pod \"dnsmasq-dns-8554648995-k9ngj\" (UID: \"3404c138-4060-43de-9cc5-d6017b245f2c\") " pod="openstack/dnsmasq-dns-8554648995-k9ngj" Feb 18 00:46:53 crc 
kubenswrapper[4847]: I0218 00:46:53.342148 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-k9ngj\" (UID: \"3404c138-4060-43de-9cc5-d6017b245f2c\") " pod="openstack/dnsmasq-dns-8554648995-k9ngj" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.343060 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-k9ngj\" (UID: \"3404c138-4060-43de-9cc5-d6017b245f2c\") " pod="openstack/dnsmasq-dns-8554648995-k9ngj" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.343744 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-dns-svc\") pod \"dnsmasq-dns-8554648995-k9ngj\" (UID: \"3404c138-4060-43de-9cc5-d6017b245f2c\") " pod="openstack/dnsmasq-dns-8554648995-k9ngj" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.344209 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-k9ngj\" (UID: \"3404c138-4060-43de-9cc5-d6017b245f2c\") " pod="openstack/dnsmasq-dns-8554648995-k9ngj" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.344575 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-config\") pod \"dnsmasq-dns-8554648995-k9ngj\" (UID: \"3404c138-4060-43de-9cc5-d6017b245f2c\") " pod="openstack/dnsmasq-dns-8554648995-k9ngj" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.382773 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cx5ln\" (UniqueName: \"kubernetes.io/projected/c786cee2-3b0c-42f3-ba21-c5bb877332ef-kube-api-access-cx5ln\") pod \"dnsmasq-dns-5bf47b49b7-vk958\" (UID: \"c786cee2-3b0c-42f3-ba21-c5bb877332ef\") " pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.384887 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlm6r\" (UniqueName: \"kubernetes.io/projected/3404c138-4060-43de-9cc5-d6017b245f2c-kube-api-access-wlm6r\") pod \"dnsmasq-dns-8554648995-k9ngj\" (UID: \"3404c138-4060-43de-9cc5-d6017b245f2c\") " pod="openstack/dnsmasq-dns-8554648995-k9ngj" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.442887 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.503230 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.503285 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.536755 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-k9ngj" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.562649 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-79d5g" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.650290 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1213dcf5-90b2-4824-865c-baf5f7646ebc-dns-svc\") pod \"1213dcf5-90b2-4824-865c-baf5f7646ebc\" (UID: \"1213dcf5-90b2-4824-865c-baf5f7646ebc\") " Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.651060 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1213dcf5-90b2-4824-865c-baf5f7646ebc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1213dcf5-90b2-4824-865c-baf5f7646ebc" (UID: "1213dcf5-90b2-4824-865c-baf5f7646ebc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.651290 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z5dh\" (UniqueName: \"kubernetes.io/projected/1213dcf5-90b2-4824-865c-baf5f7646ebc-kube-api-access-7z5dh\") pod \"1213dcf5-90b2-4824-865c-baf5f7646ebc\" (UID: \"1213dcf5-90b2-4824-865c-baf5f7646ebc\") " Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.653200 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1213dcf5-90b2-4824-865c-baf5f7646ebc-config" (OuterVolumeSpecName: "config") pod "1213dcf5-90b2-4824-865c-baf5f7646ebc" (UID: "1213dcf5-90b2-4824-865c-baf5f7646ebc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.655735 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1213dcf5-90b2-4824-865c-baf5f7646ebc-config\") pod \"1213dcf5-90b2-4824-865c-baf5f7646ebc\" (UID: \"1213dcf5-90b2-4824-865c-baf5f7646ebc\") " Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.656971 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1213dcf5-90b2-4824-865c-baf5f7646ebc-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.656990 4847 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1213dcf5-90b2-4824-865c-baf5f7646ebc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.697807 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8946t" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.719247 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8946t" event={"ID":"f1b86276-e92b-4785-9d34-b66040d0ece2","Type":"ContainerDied","Data":"ada2155ed20ec3bb1da1aa7bb927cf541ab2ff087c7a02faaff3a846ddc357b7"} Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.719306 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8946t" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.722176 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-79d5g" event={"ID":"1213dcf5-90b2-4824-865c-baf5f7646ebc","Type":"ContainerDied","Data":"a1293589e8deb243e5eac7c1a00adcfd23859596e9dd393e60e43b95e2bd8334"} Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.723202 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-79d5g" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.783044 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.783097 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.860319 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1b86276-e92b-4785-9d34-b66040d0ece2-config\") pod \"f1b86276-e92b-4785-9d34-b66040d0ece2\" (UID: \"f1b86276-e92b-4785-9d34-b66040d0ece2\") " Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.860594 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1b86276-e92b-4785-9d34-b66040d0ece2-dns-svc\") pod \"f1b86276-e92b-4785-9d34-b66040d0ece2\" (UID: \"f1b86276-e92b-4785-9d34-b66040d0ece2\") " Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.860653 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4hbc\" (UniqueName: \"kubernetes.io/projected/f1b86276-e92b-4785-9d34-b66040d0ece2-kube-api-access-m4hbc\") pod \"f1b86276-e92b-4785-9d34-b66040d0ece2\" (UID: \"f1b86276-e92b-4785-9d34-b66040d0ece2\") " Feb 18 00:46:53 crc 
kubenswrapper[4847]: I0218 00:46:53.861035 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1b86276-e92b-4785-9d34-b66040d0ece2-config" (OuterVolumeSpecName: "config") pod "f1b86276-e92b-4785-9d34-b66040d0ece2" (UID: "f1b86276-e92b-4785-9d34-b66040d0ece2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.861093 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1b86276-e92b-4785-9d34-b66040d0ece2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f1b86276-e92b-4785-9d34-b66040d0ece2" (UID: "f1b86276-e92b-4785-9d34-b66040d0ece2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.861379 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1b86276-e92b-4785-9d34-b66040d0ece2-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:46:53 crc kubenswrapper[4847]: I0218 00:46:53.861399 4847 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1b86276-e92b-4785-9d34-b66040d0ece2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:46:54 crc kubenswrapper[4847]: I0218 00:46:54.020217 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1213dcf5-90b2-4824-865c-baf5f7646ebc-kube-api-access-7z5dh" (OuterVolumeSpecName: "kube-api-access-7z5dh") pod "1213dcf5-90b2-4824-865c-baf5f7646ebc" (UID: "1213dcf5-90b2-4824-865c-baf5f7646ebc"). InnerVolumeSpecName "kube-api-access-7z5dh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:46:54 crc kubenswrapper[4847]: I0218 00:46:54.024783 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1b86276-e92b-4785-9d34-b66040d0ece2-kube-api-access-m4hbc" (OuterVolumeSpecName: "kube-api-access-m4hbc") pod "f1b86276-e92b-4785-9d34-b66040d0ece2" (UID: "f1b86276-e92b-4785-9d34-b66040d0ece2"). InnerVolumeSpecName "kube-api-access-m4hbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:46:54 crc kubenswrapper[4847]: I0218 00:46:54.067738 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z5dh\" (UniqueName: \"kubernetes.io/projected/1213dcf5-90b2-4824-865c-baf5f7646ebc-kube-api-access-7z5dh\") on node \"crc\" DevicePath \"\"" Feb 18 00:46:54 crc kubenswrapper[4847]: I0218 00:46:54.067774 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4hbc\" (UniqueName: \"kubernetes.io/projected/f1b86276-e92b-4785-9d34-b66040d0ece2-kube-api-access-m4hbc\") on node \"crc\" DevicePath \"\"" Feb 18 00:46:54 crc kubenswrapper[4847]: I0218 00:46:54.085015 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-79d5g"] Feb 18 00:46:54 crc kubenswrapper[4847]: I0218 00:46:54.093009 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-79d5g"] Feb 18 00:46:54 crc kubenswrapper[4847]: I0218 00:46:54.370527 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-vk958"] Feb 18 00:46:54 crc kubenswrapper[4847]: I0218 00:46:54.410636 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8946t"] Feb 18 00:46:54 crc kubenswrapper[4847]: I0218 00:46:54.433336 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8946t"] Feb 18 00:46:54 crc kubenswrapper[4847]: I0218 00:46:54.442469 4847 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/ovn-controller-metrics-t5ftr"] Feb 18 00:46:54 crc kubenswrapper[4847]: I0218 00:46:54.591666 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-k9ngj"] Feb 18 00:46:54 crc kubenswrapper[4847]: I0218 00:46:54.734563 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-h5k8p" event={"ID":"b233fc1e-4730-4c0c-bf0d-741bf86d3a19","Type":"ContainerStarted","Data":"610ff146cf2684d6bd4c76805686c6cd6fde54de32424e37f304cc6bfd4112c0"} Feb 18 00:46:55 crc kubenswrapper[4847]: W0218 00:46:55.044980 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3404c138_4060_43de_9cc5_d6017b245f2c.slice/crio-ca1b50dd2c0fc04b11e7b6b959e90324b2f3b3ddea69388663b22990db92fd51 WatchSource:0}: Error finding container ca1b50dd2c0fc04b11e7b6b959e90324b2f3b3ddea69388663b22990db92fd51: Status 404 returned error can't find the container with id ca1b50dd2c0fc04b11e7b6b959e90324b2f3b3ddea69388663b22990db92fd51 Feb 18 00:46:55 crc kubenswrapper[4847]: I0218 00:46:55.424630 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1213dcf5-90b2-4824-865c-baf5f7646ebc" path="/var/lib/kubelet/pods/1213dcf5-90b2-4824-865c-baf5f7646ebc/volumes" Feb 18 00:46:55 crc kubenswrapper[4847]: I0218 00:46:55.425239 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1b86276-e92b-4785-9d34-b66040d0ece2" path="/var/lib/kubelet/pods/f1b86276-e92b-4785-9d34-b66040d0ece2/volumes" Feb 18 00:46:55 crc kubenswrapper[4847]: I0218 00:46:55.746247 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-h5k8p" event={"ID":"b233fc1e-4730-4c0c-bf0d-741bf86d3a19","Type":"ContainerStarted","Data":"9f10396c685df481eed091c224011b16a8fa04d6f82f5a9c65612cfe50150f2c"} Feb 18 00:46:55 crc kubenswrapper[4847]: I0218 00:46:55.747685 4847 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:46:55 crc kubenswrapper[4847]: I0218 00:46:55.747841 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:46:55 crc kubenswrapper[4847]: I0218 00:46:55.750448 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"cbdd48eb-2162-4fc5-9d56-3e58835ac6bc","Type":"ContainerStarted","Data":"f470bfa6fe06f3e0aaddb20751ac99e39effbd220a0695f1a033c66c6bd887dc"} Feb 18 00:46:55 crc kubenswrapper[4847]: I0218 00:46:55.752987 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-t5ftr" event={"ID":"a9267970-665c-43c5-be4c-1cd26b39ad2d","Type":"ContainerStarted","Data":"3f765f550be3a3716a778105c505139a5c04243f7daa2aedc60636e4bbed250f"} Feb 18 00:46:55 crc kubenswrapper[4847]: I0218 00:46:55.755419 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1989970b-d11c-44b8-b0b7-011c8e842c1f","Type":"ContainerStarted","Data":"bbe804413d16311bc73e463a320aae7e1af7fcec38d9771f74f50bb56dd17c1f"} Feb 18 00:46:55 crc kubenswrapper[4847]: I0218 00:46:55.758551 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"b50a5134-eeac-410c-8f07-b6a4c141386e","Type":"ContainerStarted","Data":"53e0cee10a02bf432fa621916a6195164f9fec004fa04d6bab6d39cd3cb87020"} Feb 18 00:46:55 crc kubenswrapper[4847]: I0218 00:46:55.760401 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-k9ngj" event={"ID":"3404c138-4060-43de-9cc5-d6017b245f2c","Type":"ContainerStarted","Data":"ca1b50dd2c0fc04b11e7b6b959e90324b2f3b3ddea69388663b22990db92fd51"} Feb 18 00:46:55 crc kubenswrapper[4847]: I0218 00:46:55.763254 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" 
event={"ID":"c786cee2-3b0c-42f3-ba21-c5bb877332ef","Type":"ContainerStarted","Data":"40f865ec2751861c902ba9a5974b98c1fa60fb15d7412a1d52178fbb73b82144"} Feb 18 00:46:55 crc kubenswrapper[4847]: I0218 00:46:55.767157 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-h5k8p" podStartSLOduration=21.648660738 podStartE2EDuration="26.767136632s" podCreationTimestamp="2026-02-18 00:46:29 +0000 UTC" firstStartedPulling="2026-02-18 00:46:45.71208988 +0000 UTC m=+1279.089440822" lastFinishedPulling="2026-02-18 00:46:50.830565774 +0000 UTC m=+1284.207916716" observedRunningTime="2026-02-18 00:46:55.763358942 +0000 UTC m=+1289.140709884" watchObservedRunningTime="2026-02-18 00:46:55.767136632 +0000 UTC m=+1289.144487574" Feb 18 00:46:55 crc kubenswrapper[4847]: I0218 00:46:55.843681 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=14.513642372 podStartE2EDuration="24.843662934s" podCreationTimestamp="2026-02-18 00:46:31 +0000 UTC" firstStartedPulling="2026-02-18 00:46:44.802539544 +0000 UTC m=+1278.179890486" lastFinishedPulling="2026-02-18 00:46:55.132560066 +0000 UTC m=+1288.509911048" observedRunningTime="2026-02-18 00:46:55.842425334 +0000 UTC m=+1289.219776276" watchObservedRunningTime="2026-02-18 00:46:55.843662934 +0000 UTC m=+1289.221013876" Feb 18 00:46:55 crc kubenswrapper[4847]: I0218 00:46:55.846576 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=18.442633987 podStartE2EDuration="27.846567702s" podCreationTimestamp="2026-02-18 00:46:28 +0000 UTC" firstStartedPulling="2026-02-18 00:46:45.717419446 +0000 UTC m=+1279.094770388" lastFinishedPulling="2026-02-18 00:46:55.121353161 +0000 UTC m=+1288.498704103" observedRunningTime="2026-02-18 00:46:55.817829522 +0000 UTC m=+1289.195180464" watchObservedRunningTime="2026-02-18 00:46:55.846567702 +0000 UTC 
m=+1289.223918644" Feb 18 00:46:56 crc kubenswrapper[4847]: I0218 00:46:56.566909 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:56 crc kubenswrapper[4847]: I0218 00:46:56.600518 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 18 00:46:56 crc kubenswrapper[4847]: I0218 00:46:56.711110 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:56 crc kubenswrapper[4847]: I0218 00:46:56.774829 4847 generic.go:334] "Generic (PLEG): container finished" podID="3404c138-4060-43de-9cc5-d6017b245f2c" containerID="09c81b89f7191b6fee222a821018edf1835400878ae32eca27fb0d6111a8218e" exitCode=0 Feb 18 00:46:56 crc kubenswrapper[4847]: I0218 00:46:56.774883 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-k9ngj" event={"ID":"3404c138-4060-43de-9cc5-d6017b245f2c","Type":"ContainerDied","Data":"09c81b89f7191b6fee222a821018edf1835400878ae32eca27fb0d6111a8218e"} Feb 18 00:46:56 crc kubenswrapper[4847]: I0218 00:46:56.776320 4847 generic.go:334] "Generic (PLEG): container finished" podID="c786cee2-3b0c-42f3-ba21-c5bb877332ef" containerID="920f2506e8d47f1682c23ebabeeb2e218f5769dadaea720584eb48aa92a4ed65" exitCode=0 Feb 18 00:46:56 crc kubenswrapper[4847]: I0218 00:46:56.776405 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" event={"ID":"c786cee2-3b0c-42f3-ba21-c5bb877332ef","Type":"ContainerDied","Data":"920f2506e8d47f1682c23ebabeeb2e218f5769dadaea720584eb48aa92a4ed65"} Feb 18 00:46:56 crc kubenswrapper[4847]: I0218 00:46:56.783426 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-t5ftr" event={"ID":"a9267970-665c-43c5-be4c-1cd26b39ad2d","Type":"ContainerStarted","Data":"dc86b1e0b5ef4e1512d4c2794581b2796c00b03b193ece5d840b3b8922508a3e"} Feb 18 
00:46:56 crc kubenswrapper[4847]: I0218 00:46:56.784710 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:56 crc kubenswrapper[4847]: I0218 00:46:56.854838 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-t5ftr" podStartSLOduration=4.13533975 podStartE2EDuration="4.854823616s" podCreationTimestamp="2026-02-18 00:46:52 +0000 UTC" firstStartedPulling="2026-02-18 00:46:55.050886902 +0000 UTC m=+1288.428237844" lastFinishedPulling="2026-02-18 00:46:55.770370768 +0000 UTC m=+1289.147721710" observedRunningTime="2026-02-18 00:46:56.852102122 +0000 UTC m=+1290.229453064" watchObservedRunningTime="2026-02-18 00:46:56.854823616 +0000 UTC m=+1290.232174558" Feb 18 00:46:56 crc kubenswrapper[4847]: I0218 00:46:56.858952 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 18 00:46:57 crc kubenswrapper[4847]: I0218 00:46:57.208339 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:57 crc kubenswrapper[4847]: I0218 00:46:57.248466 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:57 crc kubenswrapper[4847]: I0218 00:46:57.794307 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1977a705-30e5-456c-8e2c-2cd05e0325e3","Type":"ContainerStarted","Data":"d96bb8e16fe87474f7d51baf5d2ee2d7beb30a197c7d10da3871934e6475e918"} Feb 18 00:46:57 crc kubenswrapper[4847]: I0218 00:46:57.796353 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-k9ngj" event={"ID":"3404c138-4060-43de-9cc5-d6017b245f2c","Type":"ContainerStarted","Data":"54888030145adc68ec0c91dbb42e7189a0e53a67b037568a91cbb8747dbc0545"} Feb 18 00:46:57 crc kubenswrapper[4847]: I0218 00:46:57.796547 4847 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-k9ngj" Feb 18 00:46:57 crc kubenswrapper[4847]: I0218 00:46:57.798176 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" event={"ID":"c786cee2-3b0c-42f3-ba21-c5bb877332ef","Type":"ContainerStarted","Data":"033f04cb72d8bccc691dbe922eb06472bc74f5e37959bdf4c299af13fd4259cd"} Feb 18 00:46:57 crc kubenswrapper[4847]: I0218 00:46:57.799548 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 18 00:46:57 crc kubenswrapper[4847]: I0218 00:46:57.841919 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-k9ngj" podStartSLOduration=5.390037998 podStartE2EDuration="5.841901527s" podCreationTimestamp="2026-02-18 00:46:52 +0000 UTC" firstStartedPulling="2026-02-18 00:46:55.058184895 +0000 UTC m=+1288.435535867" lastFinishedPulling="2026-02-18 00:46:55.510048444 +0000 UTC m=+1288.887399396" observedRunningTime="2026-02-18 00:46:57.841113218 +0000 UTC m=+1291.218464160" watchObservedRunningTime="2026-02-18 00:46:57.841901527 +0000 UTC m=+1291.219252469" Feb 18 00:46:57 crc kubenswrapper[4847]: I0218 00:46:57.866766 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" podStartSLOduration=5.387015886 podStartE2EDuration="5.866746535s" podCreationTimestamp="2026-02-18 00:46:52 +0000 UTC" firstStartedPulling="2026-02-18 00:46:55.04446773 +0000 UTC m=+1288.421818682" lastFinishedPulling="2026-02-18 00:46:55.524198389 +0000 UTC m=+1288.901549331" observedRunningTime="2026-02-18 00:46:57.86396535 +0000 UTC m=+1291.241316302" watchObservedRunningTime="2026-02-18 00:46:57.866746535 +0000 UTC m=+1291.244097487" Feb 18 00:46:57 crc kubenswrapper[4847]: I0218 00:46:57.881384 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 18 
00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.169651 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.171745 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.184797 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.184964 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.184981 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.185546 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-phbfh" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.199687 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.278567 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcef7123-9c18-4431-b436-e6c6e6881f5a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"fcef7123-9c18-4431-b436-e6c6e6881f5a\") " pod="openstack/ovn-northd-0" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.278666 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcef7123-9c18-4431-b436-e6c6e6881f5a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"fcef7123-9c18-4431-b436-e6c6e6881f5a\") " pod="openstack/ovn-northd-0" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.278702 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcef7123-9c18-4431-b436-e6c6e6881f5a-scripts\") pod \"ovn-northd-0\" (UID: \"fcef7123-9c18-4431-b436-e6c6e6881f5a\") " pod="openstack/ovn-northd-0" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.278732 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fcef7123-9c18-4431-b436-e6c6e6881f5a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"fcef7123-9c18-4431-b436-e6c6e6881f5a\") " pod="openstack/ovn-northd-0" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.278750 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrpwv\" (UniqueName: \"kubernetes.io/projected/fcef7123-9c18-4431-b436-e6c6e6881f5a-kube-api-access-qrpwv\") pod \"ovn-northd-0\" (UID: \"fcef7123-9c18-4431-b436-e6c6e6881f5a\") " pod="openstack/ovn-northd-0" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.278768 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcef7123-9c18-4431-b436-e6c6e6881f5a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"fcef7123-9c18-4431-b436-e6c6e6881f5a\") " pod="openstack/ovn-northd-0" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.278805 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcef7123-9c18-4431-b436-e6c6e6881f5a-config\") pod \"ovn-northd-0\" (UID: \"fcef7123-9c18-4431-b436-e6c6e6881f5a\") " pod="openstack/ovn-northd-0" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.380881 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fcef7123-9c18-4431-b436-e6c6e6881f5a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"fcef7123-9c18-4431-b436-e6c6e6881f5a\") " pod="openstack/ovn-northd-0" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.380955 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcef7123-9c18-4431-b436-e6c6e6881f5a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"fcef7123-9c18-4431-b436-e6c6e6881f5a\") " pod="openstack/ovn-northd-0" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.380996 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcef7123-9c18-4431-b436-e6c6e6881f5a-scripts\") pod \"ovn-northd-0\" (UID: \"fcef7123-9c18-4431-b436-e6c6e6881f5a\") " pod="openstack/ovn-northd-0" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.381030 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fcef7123-9c18-4431-b436-e6c6e6881f5a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"fcef7123-9c18-4431-b436-e6c6e6881f5a\") " pod="openstack/ovn-northd-0" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.381047 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrpwv\" (UniqueName: \"kubernetes.io/projected/fcef7123-9c18-4431-b436-e6c6e6881f5a-kube-api-access-qrpwv\") pod \"ovn-northd-0\" (UID: \"fcef7123-9c18-4431-b436-e6c6e6881f5a\") " pod="openstack/ovn-northd-0" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.381067 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcef7123-9c18-4431-b436-e6c6e6881f5a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"fcef7123-9c18-4431-b436-e6c6e6881f5a\") " pod="openstack/ovn-northd-0" Feb 18 00:46:58 crc 
kubenswrapper[4847]: I0218 00:46:58.381105 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcef7123-9c18-4431-b436-e6c6e6881f5a-config\") pod \"ovn-northd-0\" (UID: \"fcef7123-9c18-4431-b436-e6c6e6881f5a\") " pod="openstack/ovn-northd-0" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.381992 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fcef7123-9c18-4431-b436-e6c6e6881f5a-scripts\") pod \"ovn-northd-0\" (UID: \"fcef7123-9c18-4431-b436-e6c6e6881f5a\") " pod="openstack/ovn-northd-0" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.382014 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcef7123-9c18-4431-b436-e6c6e6881f5a-config\") pod \"ovn-northd-0\" (UID: \"fcef7123-9c18-4431-b436-e6c6e6881f5a\") " pod="openstack/ovn-northd-0" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.384820 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fcef7123-9c18-4431-b436-e6c6e6881f5a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"fcef7123-9c18-4431-b436-e6c6e6881f5a\") " pod="openstack/ovn-northd-0" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.388098 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcef7123-9c18-4431-b436-e6c6e6881f5a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"fcef7123-9c18-4431-b436-e6c6e6881f5a\") " pod="openstack/ovn-northd-0" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.388574 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcef7123-9c18-4431-b436-e6c6e6881f5a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"fcef7123-9c18-4431-b436-e6c6e6881f5a\") " pod="openstack/ovn-northd-0" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.388686 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcef7123-9c18-4431-b436-e6c6e6881f5a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"fcef7123-9c18-4431-b436-e6c6e6881f5a\") " pod="openstack/ovn-northd-0" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.406820 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrpwv\" (UniqueName: \"kubernetes.io/projected/fcef7123-9c18-4431-b436-e6c6e6881f5a-kube-api-access-qrpwv\") pod \"ovn-northd-0\" (UID: \"fcef7123-9c18-4431-b436-e6c6e6881f5a\") " pod="openstack/ovn-northd-0" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.443585 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" Feb 18 00:46:58 crc kubenswrapper[4847]: I0218 00:46:58.492325 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 18 00:46:59 crc kubenswrapper[4847]: I0218 00:46:59.102795 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 18 00:46:59 crc kubenswrapper[4847]: I0218 00:46:59.133267 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 18 00:46:59 crc kubenswrapper[4847]: I0218 00:46:59.814769 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"fcef7123-9c18-4431-b436-e6c6e6881f5a","Type":"ContainerStarted","Data":"88e0f4c4565db26857efa9022f3bb9deb3459192281da92303f0180813224311"} Feb 18 00:47:00 crc kubenswrapper[4847]: I0218 00:47:00.826471 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"fcef7123-9c18-4431-b436-e6c6e6881f5a","Type":"ContainerStarted","Data":"f3a1f68f5a4c4f4a11c14159f3e0c80dcbf4f0b377ed4316e424add71ba5205f"} Feb 18 00:47:00 crc kubenswrapper[4847]: I0218 00:47:00.826841 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"fcef7123-9c18-4431-b436-e6c6e6881f5a","Type":"ContainerStarted","Data":"26dd19c749e6082472c004f8418d73c632f55486c8b561d76ccd3a248e5bd36b"} Feb 18 00:47:00 crc kubenswrapper[4847]: I0218 00:47:00.826858 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 18 00:47:00 crc kubenswrapper[4847]: I0218 00:47:00.854966 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.844927705 podStartE2EDuration="2.85493814s" podCreationTimestamp="2026-02-18 00:46:58 +0000 UTC" firstStartedPulling="2026-02-18 00:46:59.108639571 +0000 UTC m=+1292.485990533" lastFinishedPulling="2026-02-18 00:47:00.118650026 +0000 UTC m=+1293.496000968" observedRunningTime="2026-02-18 00:47:00.843136871 +0000 UTC m=+1294.220487823" watchObservedRunningTime="2026-02-18 
00:47:00.85493814 +0000 UTC m=+1294.232289092" Feb 18 00:47:01 crc kubenswrapper[4847]: I0218 00:47:01.936326 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 18 00:47:02 crc kubenswrapper[4847]: I0218 00:47:02.083658 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 18 00:47:02 crc kubenswrapper[4847]: I0218 00:47:02.318674 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 18 00:47:02 crc kubenswrapper[4847]: I0218 00:47:02.318738 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 18 00:47:02 crc kubenswrapper[4847]: I0218 00:47:02.423076 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 18 00:47:02 crc kubenswrapper[4847]: I0218 00:47:02.483482 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qfzh5"] Feb 18 00:47:02 crc kubenswrapper[4847]: I0218 00:47:02.484918 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qfzh5" Feb 18 00:47:02 crc kubenswrapper[4847]: I0218 00:47:02.486765 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 18 00:47:02 crc kubenswrapper[4847]: I0218 00:47:02.503205 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qfzh5"] Feb 18 00:47:02 crc kubenswrapper[4847]: I0218 00:47:02.563528 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blrwf\" (UniqueName: \"kubernetes.io/projected/39e4ef39-beda-44bd-bbaf-f3a8a4c8917b-kube-api-access-blrwf\") pod \"root-account-create-update-qfzh5\" (UID: \"39e4ef39-beda-44bd-bbaf-f3a8a4c8917b\") " pod="openstack/root-account-create-update-qfzh5" Feb 18 00:47:02 crc kubenswrapper[4847]: I0218 00:47:02.563595 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39e4ef39-beda-44bd-bbaf-f3a8a4c8917b-operator-scripts\") pod \"root-account-create-update-qfzh5\" (UID: \"39e4ef39-beda-44bd-bbaf-f3a8a4c8917b\") " pod="openstack/root-account-create-update-qfzh5" Feb 18 00:47:02 crc kubenswrapper[4847]: I0218 00:47:02.665821 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blrwf\" (UniqueName: \"kubernetes.io/projected/39e4ef39-beda-44bd-bbaf-f3a8a4c8917b-kube-api-access-blrwf\") pod \"root-account-create-update-qfzh5\" (UID: \"39e4ef39-beda-44bd-bbaf-f3a8a4c8917b\") " pod="openstack/root-account-create-update-qfzh5" Feb 18 00:47:02 crc kubenswrapper[4847]: I0218 00:47:02.666186 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39e4ef39-beda-44bd-bbaf-f3a8a4c8917b-operator-scripts\") pod \"root-account-create-update-qfzh5\" (UID: 
\"39e4ef39-beda-44bd-bbaf-f3a8a4c8917b\") " pod="openstack/root-account-create-update-qfzh5" Feb 18 00:47:02 crc kubenswrapper[4847]: I0218 00:47:02.667574 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39e4ef39-beda-44bd-bbaf-f3a8a4c8917b-operator-scripts\") pod \"root-account-create-update-qfzh5\" (UID: \"39e4ef39-beda-44bd-bbaf-f3a8a4c8917b\") " pod="openstack/root-account-create-update-qfzh5" Feb 18 00:47:02 crc kubenswrapper[4847]: I0218 00:47:02.694528 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blrwf\" (UniqueName: \"kubernetes.io/projected/39e4ef39-beda-44bd-bbaf-f3a8a4c8917b-kube-api-access-blrwf\") pod \"root-account-create-update-qfzh5\" (UID: \"39e4ef39-beda-44bd-bbaf-f3a8a4c8917b\") " pod="openstack/root-account-create-update-qfzh5" Feb 18 00:47:02 crc kubenswrapper[4847]: I0218 00:47:02.829380 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qfzh5" Feb 18 00:47:02 crc kubenswrapper[4847]: I0218 00:47:02.848508 4847 generic.go:334] "Generic (PLEG): container finished" podID="1989970b-d11c-44b8-b0b7-011c8e842c1f" containerID="bbe804413d16311bc73e463a320aae7e1af7fcec38d9771f74f50bb56dd17c1f" exitCode=0 Feb 18 00:47:02 crc kubenswrapper[4847]: I0218 00:47:02.848746 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1989970b-d11c-44b8-b0b7-011c8e842c1f","Type":"ContainerDied","Data":"bbe804413d16311bc73e463a320aae7e1af7fcec38d9771f74f50bb56dd17c1f"} Feb 18 00:47:02 crc kubenswrapper[4847]: I0218 00:47:02.949781 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 18 00:47:03 crc kubenswrapper[4847]: I0218 00:47:03.436873 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qfzh5"] Feb 18 00:47:03 crc kubenswrapper[4847]: I0218 00:47:03.444756 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" Feb 18 00:47:03 crc kubenswrapper[4847]: I0218 00:47:03.539551 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-k9ngj" Feb 18 00:47:03 crc kubenswrapper[4847]: I0218 00:47:03.591186 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-vk958"] Feb 18 00:47:03 crc kubenswrapper[4847]: I0218 00:47:03.863833 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qfzh5" event={"ID":"39e4ef39-beda-44bd-bbaf-f3a8a4c8917b","Type":"ContainerStarted","Data":"fd6fa9869acc2ce9ffe611456f74bb577a74eecdb72c6b050783772f0a9b92fe"} Feb 18 00:47:03 crc kubenswrapper[4847]: I0218 00:47:03.863892 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qfzh5" 
event={"ID":"39e4ef39-beda-44bd-bbaf-f3a8a4c8917b","Type":"ContainerStarted","Data":"cc6916034656dfe19f0f2a18570a946f8b2362b745386fe9eae71063fe982726"} Feb 18 00:47:03 crc kubenswrapper[4847]: I0218 00:47:03.864041 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" podUID="c786cee2-3b0c-42f3-ba21-c5bb877332ef" containerName="dnsmasq-dns" containerID="cri-o://033f04cb72d8bccc691dbe922eb06472bc74f5e37959bdf4c299af13fd4259cd" gracePeriod=10 Feb 18 00:47:03 crc kubenswrapper[4847]: I0218 00:47:03.913887 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-qfzh5" podStartSLOduration=1.9138649380000001 podStartE2EDuration="1.913864938s" podCreationTimestamp="2026-02-18 00:47:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:47:03.90420656 +0000 UTC m=+1297.281557502" watchObservedRunningTime="2026-02-18 00:47:03.913864938 +0000 UTC m=+1297.291215870" Feb 18 00:47:04 crc kubenswrapper[4847]: I0218 00:47:04.877480 4847 generic.go:334] "Generic (PLEG): container finished" podID="c786cee2-3b0c-42f3-ba21-c5bb877332ef" containerID="033f04cb72d8bccc691dbe922eb06472bc74f5e37959bdf4c299af13fd4259cd" exitCode=0 Feb 18 00:47:04 crc kubenswrapper[4847]: I0218 00:47:04.877721 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" event={"ID":"c786cee2-3b0c-42f3-ba21-c5bb877332ef","Type":"ContainerDied","Data":"033f04cb72d8bccc691dbe922eb06472bc74f5e37959bdf4c299af13fd4259cd"} Feb 18 00:47:04 crc kubenswrapper[4847]: I0218 00:47:04.917360 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-wmjzw"] Feb 18 00:47:04 crc kubenswrapper[4847]: I0218 00:47:04.918658 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-wmjzw" Feb 18 00:47:04 crc kubenswrapper[4847]: I0218 00:47:04.938650 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wmjzw"] Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.030072 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhvtg\" (UniqueName: \"kubernetes.io/projected/899af78e-0f52-4b70-8817-47ea4fe4d344-kube-api-access-fhvtg\") pod \"keystone-db-create-wmjzw\" (UID: \"899af78e-0f52-4b70-8817-47ea4fe4d344\") " pod="openstack/keystone-db-create-wmjzw" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.030428 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/899af78e-0f52-4b70-8817-47ea4fe4d344-operator-scripts\") pod \"keystone-db-create-wmjzw\" (UID: \"899af78e-0f52-4b70-8817-47ea4fe4d344\") " pod="openstack/keystone-db-create-wmjzw" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.051410 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7fd1-account-create-update-5t9jf"] Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.052839 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7fd1-account-create-update-5t9jf" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.060113 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.064309 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7fd1-account-create-update-5t9jf"] Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.131737 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/899af78e-0f52-4b70-8817-47ea4fe4d344-operator-scripts\") pod \"keystone-db-create-wmjzw\" (UID: \"899af78e-0f52-4b70-8817-47ea4fe4d344\") " pod="openstack/keystone-db-create-wmjzw" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.131837 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbqkn\" (UniqueName: \"kubernetes.io/projected/ecd4a05d-c720-48e0-9ef0-1101d4ee0a17-kube-api-access-xbqkn\") pod \"keystone-7fd1-account-create-update-5t9jf\" (UID: \"ecd4a05d-c720-48e0-9ef0-1101d4ee0a17\") " pod="openstack/keystone-7fd1-account-create-update-5t9jf" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.131888 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhvtg\" (UniqueName: \"kubernetes.io/projected/899af78e-0f52-4b70-8817-47ea4fe4d344-kube-api-access-fhvtg\") pod \"keystone-db-create-wmjzw\" (UID: \"899af78e-0f52-4b70-8817-47ea4fe4d344\") " pod="openstack/keystone-db-create-wmjzw" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.131932 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecd4a05d-c720-48e0-9ef0-1101d4ee0a17-operator-scripts\") pod \"keystone-7fd1-account-create-update-5t9jf\" (UID: 
\"ecd4a05d-c720-48e0-9ef0-1101d4ee0a17\") " pod="openstack/keystone-7fd1-account-create-update-5t9jf" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.132776 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/899af78e-0f52-4b70-8817-47ea4fe4d344-operator-scripts\") pod \"keystone-db-create-wmjzw\" (UID: \"899af78e-0f52-4b70-8817-47ea4fe4d344\") " pod="openstack/keystone-db-create-wmjzw" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.140650 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-qffjj"] Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.141827 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qffjj" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.153545 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhvtg\" (UniqueName: \"kubernetes.io/projected/899af78e-0f52-4b70-8817-47ea4fe4d344-kube-api-access-fhvtg\") pod \"keystone-db-create-wmjzw\" (UID: \"899af78e-0f52-4b70-8817-47ea4fe4d344\") " pod="openstack/keystone-db-create-wmjzw" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.154083 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-qffjj"] Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.232894 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b9d5176-c1d3-4862-aa1d-4b0c5c412d48-operator-scripts\") pod \"placement-db-create-qffjj\" (UID: \"1b9d5176-c1d3-4862-aa1d-4b0c5c412d48\") " pod="openstack/placement-db-create-qffjj" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.233234 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbqkn\" (UniqueName: 
\"kubernetes.io/projected/ecd4a05d-c720-48e0-9ef0-1101d4ee0a17-kube-api-access-xbqkn\") pod \"keystone-7fd1-account-create-update-5t9jf\" (UID: \"ecd4a05d-c720-48e0-9ef0-1101d4ee0a17\") " pod="openstack/keystone-7fd1-account-create-update-5t9jf" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.233366 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcmh4\" (UniqueName: \"kubernetes.io/projected/1b9d5176-c1d3-4862-aa1d-4b0c5c412d48-kube-api-access-zcmh4\") pod \"placement-db-create-qffjj\" (UID: \"1b9d5176-c1d3-4862-aa1d-4b0c5c412d48\") " pod="openstack/placement-db-create-qffjj" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.233458 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecd4a05d-c720-48e0-9ef0-1101d4ee0a17-operator-scripts\") pod \"keystone-7fd1-account-create-update-5t9jf\" (UID: \"ecd4a05d-c720-48e0-9ef0-1101d4ee0a17\") " pod="openstack/keystone-7fd1-account-create-update-5t9jf" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.234697 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecd4a05d-c720-48e0-9ef0-1101d4ee0a17-operator-scripts\") pod \"keystone-7fd1-account-create-update-5t9jf\" (UID: \"ecd4a05d-c720-48e0-9ef0-1101d4ee0a17\") " pod="openstack/keystone-7fd1-account-create-update-5t9jf" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.246365 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-1377-account-create-update-76bd5"] Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.246741 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wmjzw" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.248099 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1377-account-create-update-76bd5" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.252894 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.257552 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbqkn\" (UniqueName: \"kubernetes.io/projected/ecd4a05d-c720-48e0-9ef0-1101d4ee0a17-kube-api-access-xbqkn\") pod \"keystone-7fd1-account-create-update-5t9jf\" (UID: \"ecd4a05d-c720-48e0-9ef0-1101d4ee0a17\") " pod="openstack/keystone-7fd1-account-create-update-5t9jf" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.260875 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1377-account-create-update-76bd5"] Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.335179 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jpx5\" (UniqueName: \"kubernetes.io/projected/9a17f384-68bb-4ce1-be12-102c477b5968-kube-api-access-4jpx5\") pod \"placement-1377-account-create-update-76bd5\" (UID: \"9a17f384-68bb-4ce1-be12-102c477b5968\") " pod="openstack/placement-1377-account-create-update-76bd5" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.335304 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a17f384-68bb-4ce1-be12-102c477b5968-operator-scripts\") pod \"placement-1377-account-create-update-76bd5\" (UID: \"9a17f384-68bb-4ce1-be12-102c477b5968\") " pod="openstack/placement-1377-account-create-update-76bd5" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.335350 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b9d5176-c1d3-4862-aa1d-4b0c5c412d48-operator-scripts\") 
pod \"placement-db-create-qffjj\" (UID: \"1b9d5176-c1d3-4862-aa1d-4b0c5c412d48\") " pod="openstack/placement-db-create-qffjj" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.335445 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcmh4\" (UniqueName: \"kubernetes.io/projected/1b9d5176-c1d3-4862-aa1d-4b0c5c412d48-kube-api-access-zcmh4\") pod \"placement-db-create-qffjj\" (UID: \"1b9d5176-c1d3-4862-aa1d-4b0c5c412d48\") " pod="openstack/placement-db-create-qffjj" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.336551 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b9d5176-c1d3-4862-aa1d-4b0c5c412d48-operator-scripts\") pod \"placement-db-create-qffjj\" (UID: \"1b9d5176-c1d3-4862-aa1d-4b0c5c412d48\") " pod="openstack/placement-db-create-qffjj" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.354646 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcmh4\" (UniqueName: \"kubernetes.io/projected/1b9d5176-c1d3-4862-aa1d-4b0c5c412d48-kube-api-access-zcmh4\") pod \"placement-db-create-qffjj\" (UID: \"1b9d5176-c1d3-4862-aa1d-4b0c5c412d48\") " pod="openstack/placement-db-create-qffjj" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.374495 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7fd1-account-create-update-5t9jf" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.436407 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jpx5\" (UniqueName: \"kubernetes.io/projected/9a17f384-68bb-4ce1-be12-102c477b5968-kube-api-access-4jpx5\") pod \"placement-1377-account-create-update-76bd5\" (UID: \"9a17f384-68bb-4ce1-be12-102c477b5968\") " pod="openstack/placement-1377-account-create-update-76bd5" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.436500 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a17f384-68bb-4ce1-be12-102c477b5968-operator-scripts\") pod \"placement-1377-account-create-update-76bd5\" (UID: \"9a17f384-68bb-4ce1-be12-102c477b5968\") " pod="openstack/placement-1377-account-create-update-76bd5" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.437575 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a17f384-68bb-4ce1-be12-102c477b5968-operator-scripts\") pod \"placement-1377-account-create-update-76bd5\" (UID: \"9a17f384-68bb-4ce1-be12-102c477b5968\") " pod="openstack/placement-1377-account-create-update-76bd5" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.465741 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jpx5\" (UniqueName: \"kubernetes.io/projected/9a17f384-68bb-4ce1-be12-102c477b5968-kube-api-access-4jpx5\") pod \"placement-1377-account-create-update-76bd5\" (UID: \"9a17f384-68bb-4ce1-be12-102c477b5968\") " pod="openstack/placement-1377-account-create-update-76bd5" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.499116 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-qffjj" Feb 18 00:47:05 crc kubenswrapper[4847]: I0218 00:47:05.599315 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1377-account-create-update-76bd5" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.426065 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-h4rtr"] Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.429382 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-h4rtr" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.460961 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6d8310c-c9e5-49cc-bc20-af6aacf1487d-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-h4rtr\" (UID: \"d6d8310c-c9e5-49cc-bc20-af6aacf1487d\") " pod="openstack/mysqld-exporter-openstack-db-create-h4rtr" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.461000 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg4jb\" (UniqueName: \"kubernetes.io/projected/d6d8310c-c9e5-49cc-bc20-af6aacf1487d-kube-api-access-wg4jb\") pod \"mysqld-exporter-openstack-db-create-h4rtr\" (UID: \"d6d8310c-c9e5-49cc-bc20-af6aacf1487d\") " pod="openstack/mysqld-exporter-openstack-db-create-h4rtr" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.466712 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-h4rtr"] Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.517487 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mlss6"] Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.518996 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.549224 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mlss6"] Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.563180 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6d8310c-c9e5-49cc-bc20-af6aacf1487d-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-h4rtr\" (UID: \"d6d8310c-c9e5-49cc-bc20-af6aacf1487d\") " pod="openstack/mysqld-exporter-openstack-db-create-h4rtr" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.563226 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg4jb\" (UniqueName: \"kubernetes.io/projected/d6d8310c-c9e5-49cc-bc20-af6aacf1487d-kube-api-access-wg4jb\") pod \"mysqld-exporter-openstack-db-create-h4rtr\" (UID: \"d6d8310c-c9e5-49cc-bc20-af6aacf1487d\") " pod="openstack/mysqld-exporter-openstack-db-create-h4rtr" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.564838 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6d8310c-c9e5-49cc-bc20-af6aacf1487d-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-h4rtr\" (UID: \"d6d8310c-c9e5-49cc-bc20-af6aacf1487d\") " pod="openstack/mysqld-exporter-openstack-db-create-h4rtr" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.574667 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-a882-account-create-update-zprw4"] Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.575857 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-a882-account-create-update-zprw4" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.578032 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.608746 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg4jb\" (UniqueName: \"kubernetes.io/projected/d6d8310c-c9e5-49cc-bc20-af6aacf1487d-kube-api-access-wg4jb\") pod \"mysqld-exporter-openstack-db-create-h4rtr\" (UID: \"d6d8310c-c9e5-49cc-bc20-af6aacf1487d\") " pod="openstack/mysqld-exporter-openstack-db-create-h4rtr" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.639095 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-a882-account-create-update-zprw4"] Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.665022 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-mlss6\" (UID: \"aa5356b9-df2c-412d-ac6d-4039afc1286b\") " pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.666252 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-config\") pod \"dnsmasq-dns-b8fbc5445-mlss6\" (UID: \"aa5356b9-df2c-412d-ac6d-4039afc1286b\") " pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.666307 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-mlss6\" (UID: 
\"aa5356b9-df2c-412d-ac6d-4039afc1286b\") " pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.666427 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn49w\" (UniqueName: \"kubernetes.io/projected/aa5356b9-df2c-412d-ac6d-4039afc1286b-kube-api-access-jn49w\") pod \"dnsmasq-dns-b8fbc5445-mlss6\" (UID: \"aa5356b9-df2c-412d-ac6d-4039afc1286b\") " pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.666453 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-mlss6\" (UID: \"aa5356b9-df2c-412d-ac6d-4039afc1286b\") " pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.767885 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn8pw\" (UniqueName: \"kubernetes.io/projected/4dd945e5-694d-47a5-9817-8e6cff5a1c8b-kube-api-access-cn8pw\") pod \"mysqld-exporter-a882-account-create-update-zprw4\" (UID: \"4dd945e5-694d-47a5-9817-8e6cff5a1c8b\") " pod="openstack/mysqld-exporter-a882-account-create-update-zprw4" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.768397 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn49w\" (UniqueName: \"kubernetes.io/projected/aa5356b9-df2c-412d-ac6d-4039afc1286b-kube-api-access-jn49w\") pod \"dnsmasq-dns-b8fbc5445-mlss6\" (UID: \"aa5356b9-df2c-412d-ac6d-4039afc1286b\") " pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.768548 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-mlss6\" (UID: \"aa5356b9-df2c-412d-ac6d-4039afc1286b\") " pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.768844 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-mlss6\" (UID: \"aa5356b9-df2c-412d-ac6d-4039afc1286b\") " pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.769003 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dd945e5-694d-47a5-9817-8e6cff5a1c8b-operator-scripts\") pod \"mysqld-exporter-a882-account-create-update-zprw4\" (UID: \"4dd945e5-694d-47a5-9817-8e6cff5a1c8b\") " pod="openstack/mysqld-exporter-a882-account-create-update-zprw4" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.769246 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-config\") pod \"dnsmasq-dns-b8fbc5445-mlss6\" (UID: \"aa5356b9-df2c-412d-ac6d-4039afc1286b\") " pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.769422 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-mlss6\" (UID: \"aa5356b9-df2c-412d-ac6d-4039afc1286b\") " pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.769975 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-mlss6\" (UID: \"aa5356b9-df2c-412d-ac6d-4039afc1286b\") " pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.770085 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-config\") pod \"dnsmasq-dns-b8fbc5445-mlss6\" (UID: \"aa5356b9-df2c-412d-ac6d-4039afc1286b\") " pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.770739 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-mlss6\" (UID: \"aa5356b9-df2c-412d-ac6d-4039afc1286b\") " pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.771815 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-mlss6\" (UID: \"aa5356b9-df2c-412d-ac6d-4039afc1286b\") " pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.782537 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-h4rtr" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.791866 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn49w\" (UniqueName: \"kubernetes.io/projected/aa5356b9-df2c-412d-ac6d-4039afc1286b-kube-api-access-jn49w\") pod \"dnsmasq-dns-b8fbc5445-mlss6\" (UID: \"aa5356b9-df2c-412d-ac6d-4039afc1286b\") " pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.872509 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dd945e5-694d-47a5-9817-8e6cff5a1c8b-operator-scripts\") pod \"mysqld-exporter-a882-account-create-update-zprw4\" (UID: \"4dd945e5-694d-47a5-9817-8e6cff5a1c8b\") " pod="openstack/mysqld-exporter-a882-account-create-update-zprw4" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.872613 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn8pw\" (UniqueName: \"kubernetes.io/projected/4dd945e5-694d-47a5-9817-8e6cff5a1c8b-kube-api-access-cn8pw\") pod \"mysqld-exporter-a882-account-create-update-zprw4\" (UID: \"4dd945e5-694d-47a5-9817-8e6cff5a1c8b\") " pod="openstack/mysqld-exporter-a882-account-create-update-zprw4" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.873306 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dd945e5-694d-47a5-9817-8e6cff5a1c8b-operator-scripts\") pod \"mysqld-exporter-a882-account-create-update-zprw4\" (UID: \"4dd945e5-694d-47a5-9817-8e6cff5a1c8b\") " pod="openstack/mysqld-exporter-a882-account-create-update-zprw4" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.892331 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn8pw\" (UniqueName: 
\"kubernetes.io/projected/4dd945e5-694d-47a5-9817-8e6cff5a1c8b-kube-api-access-cn8pw\") pod \"mysqld-exporter-a882-account-create-update-zprw4\" (UID: \"4dd945e5-694d-47a5-9817-8e6cff5a1c8b\") " pod="openstack/mysqld-exporter-a882-account-create-update-zprw4" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.904198 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.907151 4847 generic.go:334] "Generic (PLEG): container finished" podID="39e4ef39-beda-44bd-bbaf-f3a8a4c8917b" containerID="fd6fa9869acc2ce9ffe611456f74bb577a74eecdb72c6b050783772f0a9b92fe" exitCode=0 Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.907210 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qfzh5" event={"ID":"39e4ef39-beda-44bd-bbaf-f3a8a4c8917b","Type":"ContainerDied","Data":"fd6fa9869acc2ce9ffe611456f74bb577a74eecdb72c6b050783772f0a9b92fe"} Feb 18 00:47:06 crc kubenswrapper[4847]: I0218 00:47:06.979464 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-a882-account-create-update-zprw4" Feb 18 00:47:07 crc kubenswrapper[4847]: I0218 00:47:07.668869 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wmjzw"] Feb 18 00:47:07 crc kubenswrapper[4847]: W0218 00:47:07.683404 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod899af78e_0f52_4b70_8817_47ea4fe4d344.slice/crio-7f33f062a84dc4c83b9b41ce2186ac7fae09c021d40b85b24e6c4ad0397afab9 WatchSource:0}: Error finding container 7f33f062a84dc4c83b9b41ce2186ac7fae09c021d40b85b24e6c4ad0397afab9: Status 404 returned error can't find the container with id 7f33f062a84dc4c83b9b41ce2186ac7fae09c021d40b85b24e6c4ad0397afab9 Feb 18 00:47:07 crc kubenswrapper[4847]: I0218 00:47:07.719216 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 18 00:47:07 crc kubenswrapper[4847]: I0218 00:47:07.725808 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 18 00:47:07 crc kubenswrapper[4847]: I0218 00:47:07.730527 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-v92pd" Feb 18 00:47:07 crc kubenswrapper[4847]: I0218 00:47:07.730751 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 18 00:47:07 crc kubenswrapper[4847]: I0218 00:47:07.731561 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 18 00:47:07 crc kubenswrapper[4847]: I0218 00:47:07.731644 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 18 00:47:07 crc kubenswrapper[4847]: I0218 00:47:07.748710 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 00:47:07 crc kubenswrapper[4847]: I0218 00:47:07.839626 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-a882-account-create-update-zprw4"] Feb 18 00:47:07 crc kubenswrapper[4847]: I0218 00:47:07.847866 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-h4rtr"] Feb 18 00:47:07 crc kubenswrapper[4847]: I0218 00:47:07.865843 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-qffjj"] Feb 18 00:47:07 crc kubenswrapper[4847]: I0218 00:47:07.901269 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-etc-swift\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0" Feb 18 00:47:07 crc kubenswrapper[4847]: I0218 00:47:07.901344 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx6lg\" (UniqueName: 
\"kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-kube-api-access-dx6lg\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0" Feb 18 00:47:07 crc kubenswrapper[4847]: I0218 00:47:07.901382 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/623045fa-a3f1-4ad5-a5f7-361f31303bfb-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0" Feb 18 00:47:07 crc kubenswrapper[4847]: I0218 00:47:07.901516 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/623045fa-a3f1-4ad5-a5f7-361f31303bfb-cache\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0" Feb 18 00:47:07 crc kubenswrapper[4847]: I0218 00:47:07.901617 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0" Feb 18 00:47:07 crc kubenswrapper[4847]: I0218 00:47:07.901738 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/623045fa-a3f1-4ad5-a5f7-361f31303bfb-lock\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0" Feb 18 00:47:07 crc kubenswrapper[4847]: I0218 00:47:07.934155 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-h4rtr" event={"ID":"d6d8310c-c9e5-49cc-bc20-af6aacf1487d","Type":"ContainerStarted","Data":"70d998927d3b2d1b781a9115d698376cf238d66e4102bf3e1df0e6cf6dc491a2"} Feb 18 
00:47:07 crc kubenswrapper[4847]: I0218 00:47:07.946136 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wmjzw" event={"ID":"899af78e-0f52-4b70-8817-47ea4fe4d344","Type":"ContainerStarted","Data":"7f33f062a84dc4c83b9b41ce2186ac7fae09c021d40b85b24e6c4ad0397afab9"}
Feb 18 00:47:07 crc kubenswrapper[4847]: I0218 00:47:07.947829 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qffjj" event={"ID":"1b9d5176-c1d3-4862-aa1d-4b0c5c412d48","Type":"ContainerStarted","Data":"2ed2f3289c21d08b940cfa26fc84eccbf3ce575c781b4c4ec4ba3b7b95bc58de"}
Feb 18 00:47:07 crc kubenswrapper[4847]: I0218 00:47:07.949286 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-a882-account-create-update-zprw4" event={"ID":"4dd945e5-694d-47a5-9817-8e6cff5a1c8b","Type":"ContainerStarted","Data":"46fcba4fdf626f5da76d33027d0b372ee079e4ae782a197592bd3071b3ab18fa"}
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.004626 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/623045fa-a3f1-4ad5-a5f7-361f31303bfb-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0"
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.004893 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/623045fa-a3f1-4ad5-a5f7-361f31303bfb-cache\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0"
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.004927 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0"
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.004968 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/623045fa-a3f1-4ad5-a5f7-361f31303bfb-lock\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0"
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.005070 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-etc-swift\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0"
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.005089 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx6lg\" (UniqueName: \"kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-kube-api-access-dx6lg\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0"
Feb 18 00:47:08 crc kubenswrapper[4847]: E0218 00:47:08.005564 4847 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 18 00:47:08 crc kubenswrapper[4847]: E0218 00:47:08.005615 4847 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.005590 4847 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/swift-storage-0"
Feb 18 00:47:08 crc kubenswrapper[4847]: E0218 00:47:08.005673 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-etc-swift podName:623045fa-a3f1-4ad5-a5f7-361f31303bfb nodeName:}" failed. No retries permitted until 2026-02-18 00:47:08.505652073 +0000 UTC m=+1301.883003015 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-etc-swift") pod "swift-storage-0" (UID: "623045fa-a3f1-4ad5-a5f7-361f31303bfb") : configmap "swift-ring-files" not found
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.005832 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/623045fa-a3f1-4ad5-a5f7-361f31303bfb-lock\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0"
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.006091 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/623045fa-a3f1-4ad5-a5f7-361f31303bfb-cache\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0"
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.033471 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/623045fa-a3f1-4ad5-a5f7-361f31303bfb-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0"
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.036286 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx6lg\" (UniqueName: \"kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-kube-api-access-dx6lg\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0"
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.047303 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0"
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.054506 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7fd1-account-create-update-5t9jf"]
Feb 18 00:47:08 crc kubenswrapper[4847]: W0218 00:47:08.084580 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecd4a05d_c720_48e0_9ef0_1101d4ee0a17.slice/crio-f81987a532a19c4dcaecdadfb670e825691a7f350d28dd1bdb6e2de9e8f25756 WatchSource:0}: Error finding container f81987a532a19c4dcaecdadfb670e825691a7f350d28dd1bdb6e2de9e8f25756: Status 404 returned error can't find the container with id f81987a532a19c4dcaecdadfb670e825691a7f350d28dd1bdb6e2de9e8f25756
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.152321 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1377-account-create-update-76bd5"]
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.171558 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mlss6"]
Feb 18 00:47:08 crc kubenswrapper[4847]: W0218 00:47:08.176571 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa5356b9_df2c_412d_ac6d_4039afc1286b.slice/crio-c0d2b7b06316a82dd2a36e19e816e88317fe9ea47599c7e7fe0406a458aafd49 WatchSource:0}: Error finding container c0d2b7b06316a82dd2a36e19e816e88317fe9ea47599c7e7fe0406a458aafd49: Status 404 returned error can't find the container with id c0d2b7b06316a82dd2a36e19e816e88317fe9ea47599c7e7fe0406a458aafd49
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.444210 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" podUID="c786cee2-3b0c-42f3-ba21-c5bb877332ef" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.133:5353: connect: connection refused"
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.545508 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-etc-swift\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0"
Feb 18 00:47:08 crc kubenswrapper[4847]: E0218 00:47:08.545760 4847 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 18 00:47:08 crc kubenswrapper[4847]: E0218 00:47:08.545792 4847 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 18 00:47:08 crc kubenswrapper[4847]: E0218 00:47:08.545862 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-etc-swift podName:623045fa-a3f1-4ad5-a5f7-361f31303bfb nodeName:}" failed. No retries permitted until 2026-02-18 00:47:09.545837893 +0000 UTC m=+1302.923188835 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-etc-swift") pod "swift-storage-0" (UID: "623045fa-a3f1-4ad5-a5f7-361f31303bfb") : configmap "swift-ring-files" not found
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.976128 4847 generic.go:334] "Generic (PLEG): container finished" podID="1b9d5176-c1d3-4862-aa1d-4b0c5c412d48" containerID="3a942e42f549a01a2027b5b1a3435bbe79f3c127a3d1d79fbff2faa2e1123641" exitCode=0
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.976493 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qffjj" event={"ID":"1b9d5176-c1d3-4862-aa1d-4b0c5c412d48","Type":"ContainerDied","Data":"3a942e42f549a01a2027b5b1a3435bbe79f3c127a3d1d79fbff2faa2e1123641"}
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.986895 4847 generic.go:334] "Generic (PLEG): container finished" podID="4dd945e5-694d-47a5-9817-8e6cff5a1c8b" containerID="e2e70a0142a8388468f65caf6ae465301e9e17d4f5a9265048860af00a955451" exitCode=0
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.986983 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-a882-account-create-update-zprw4" event={"ID":"4dd945e5-694d-47a5-9817-8e6cff5a1c8b","Type":"ContainerDied","Data":"e2e70a0142a8388468f65caf6ae465301e9e17d4f5a9265048860af00a955451"}
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.988970 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1377-account-create-update-76bd5" event={"ID":"9a17f384-68bb-4ce1-be12-102c477b5968","Type":"ContainerStarted","Data":"b0126415fd97ccd5017cca45f34db9aadbb8d794e52647ffcdec3cf4b9f9b3d5"}
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.992953 4847 generic.go:334] "Generic (PLEG): container finished" podID="ecd4a05d-c720-48e0-9ef0-1101d4ee0a17" containerID="c188b0d181ea7fa661b29d20be5bee627936357fcb13f90477e11b5fbe6f1bfa" exitCode=0
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.993023 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7fd1-account-create-update-5t9jf" event={"ID":"ecd4a05d-c720-48e0-9ef0-1101d4ee0a17","Type":"ContainerDied","Data":"c188b0d181ea7fa661b29d20be5bee627936357fcb13f90477e11b5fbe6f1bfa"}
Feb 18 00:47:08 crc kubenswrapper[4847]: I0218 00:47:08.993041 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7fd1-account-create-update-5t9jf" event={"ID":"ecd4a05d-c720-48e0-9ef0-1101d4ee0a17","Type":"ContainerStarted","Data":"f81987a532a19c4dcaecdadfb670e825691a7f350d28dd1bdb6e2de9e8f25756"}
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.002969 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" event={"ID":"aa5356b9-df2c-412d-ac6d-4039afc1286b","Type":"ContainerStarted","Data":"c0d2b7b06316a82dd2a36e19e816e88317fe9ea47599c7e7fe0406a458aafd49"}
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.020818 4847 generic.go:334] "Generic (PLEG): container finished" podID="d6d8310c-c9e5-49cc-bc20-af6aacf1487d" containerID="15ff6b6247fda6a53fca1b34fde94429cee51a8e745e73c5d5eb6a24d54348f0" exitCode=0
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.020902 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-h4rtr" event={"ID":"d6d8310c-c9e5-49cc-bc20-af6aacf1487d","Type":"ContainerDied","Data":"15ff6b6247fda6a53fca1b34fde94429cee51a8e745e73c5d5eb6a24d54348f0"}
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.026740 4847 generic.go:334] "Generic (PLEG): container finished" podID="899af78e-0f52-4b70-8817-47ea4fe4d344" containerID="6902ccc554a3be0078859f47e8ab23be9b6781629de6ebe8ca99db5aa763a2ea" exitCode=0
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.026766 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wmjzw" event={"ID":"899af78e-0f52-4b70-8817-47ea4fe4d344","Type":"ContainerDied","Data":"6902ccc554a3be0078859f47e8ab23be9b6781629de6ebe8ca99db5aa763a2ea"}
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.341224 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-8qtg4"]
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.342523 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8qtg4"
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.350076 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8qtg4"]
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.365653 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfb5a063-20d2-4791-a141-7e87555bc17d-operator-scripts\") pod \"glance-db-create-8qtg4\" (UID: \"cfb5a063-20d2-4791-a141-7e87555bc17d\") " pod="openstack/glance-db-create-8qtg4"
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.365817 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn2fx\" (UniqueName: \"kubernetes.io/projected/cfb5a063-20d2-4791-a141-7e87555bc17d-kube-api-access-qn2fx\") pod \"glance-db-create-8qtg4\" (UID: \"cfb5a063-20d2-4791-a141-7e87555bc17d\") " pod="openstack/glance-db-create-8qtg4"
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.444363 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a47c-account-create-update-fcsjz"]
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.448729 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a47c-account-create-update-fcsjz"
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.451974 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a47c-account-create-update-fcsjz"]
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.454974 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.466854 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfb5a063-20d2-4791-a141-7e87555bc17d-operator-scripts\") pod \"glance-db-create-8qtg4\" (UID: \"cfb5a063-20d2-4791-a141-7e87555bc17d\") " pod="openstack/glance-db-create-8qtg4"
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.466921 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbx5b\" (UniqueName: \"kubernetes.io/projected/4256ea1b-a495-4301-b5b9-1e376c78852e-kube-api-access-kbx5b\") pod \"glance-a47c-account-create-update-fcsjz\" (UID: \"4256ea1b-a495-4301-b5b9-1e376c78852e\") " pod="openstack/glance-a47c-account-create-update-fcsjz"
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.467010 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn2fx\" (UniqueName: \"kubernetes.io/projected/cfb5a063-20d2-4791-a141-7e87555bc17d-kube-api-access-qn2fx\") pod \"glance-db-create-8qtg4\" (UID: \"cfb5a063-20d2-4791-a141-7e87555bc17d\") " pod="openstack/glance-db-create-8qtg4"
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.467027 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4256ea1b-a495-4301-b5b9-1e376c78852e-operator-scripts\") pod \"glance-a47c-account-create-update-fcsjz\" (UID: \"4256ea1b-a495-4301-b5b9-1e376c78852e\") " pod="openstack/glance-a47c-account-create-update-fcsjz"
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.468667 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfb5a063-20d2-4791-a141-7e87555bc17d-operator-scripts\") pod \"glance-db-create-8qtg4\" (UID: \"cfb5a063-20d2-4791-a141-7e87555bc17d\") " pod="openstack/glance-db-create-8qtg4"
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.491387 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn2fx\" (UniqueName: \"kubernetes.io/projected/cfb5a063-20d2-4791-a141-7e87555bc17d-kube-api-access-qn2fx\") pod \"glance-db-create-8qtg4\" (UID: \"cfb5a063-20d2-4791-a141-7e87555bc17d\") " pod="openstack/glance-db-create-8qtg4"
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.568089 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbx5b\" (UniqueName: \"kubernetes.io/projected/4256ea1b-a495-4301-b5b9-1e376c78852e-kube-api-access-kbx5b\") pod \"glance-a47c-account-create-update-fcsjz\" (UID: \"4256ea1b-a495-4301-b5b9-1e376c78852e\") " pod="openstack/glance-a47c-account-create-update-fcsjz"
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.568254 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4256ea1b-a495-4301-b5b9-1e376c78852e-operator-scripts\") pod \"glance-a47c-account-create-update-fcsjz\" (UID: \"4256ea1b-a495-4301-b5b9-1e376c78852e\") " pod="openstack/glance-a47c-account-create-update-fcsjz"
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.568307 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-etc-swift\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0"
Feb 18 00:47:09 crc kubenswrapper[4847]: E0218 00:47:09.568517 4847 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 18 00:47:09 crc kubenswrapper[4847]: E0218 00:47:09.568535 4847 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 18 00:47:09 crc kubenswrapper[4847]: E0218 00:47:09.568584 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-etc-swift podName:623045fa-a3f1-4ad5-a5f7-361f31303bfb nodeName:}" failed. No retries permitted until 2026-02-18 00:47:11.568566059 +0000 UTC m=+1304.945917001 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-etc-swift") pod "swift-storage-0" (UID: "623045fa-a3f1-4ad5-a5f7-361f31303bfb") : configmap "swift-ring-files" not found
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.569267 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4256ea1b-a495-4301-b5b9-1e376c78852e-operator-scripts\") pod \"glance-a47c-account-create-update-fcsjz\" (UID: \"4256ea1b-a495-4301-b5b9-1e376c78852e\") " pod="openstack/glance-a47c-account-create-update-fcsjz"
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.584722 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbx5b\" (UniqueName: \"kubernetes.io/projected/4256ea1b-a495-4301-b5b9-1e376c78852e-kube-api-access-kbx5b\") pod \"glance-a47c-account-create-update-fcsjz\" (UID: \"4256ea1b-a495-4301-b5b9-1e376c78852e\") " pod="openstack/glance-a47c-account-create-update-fcsjz"
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.694218 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8qtg4"
Feb 18 00:47:09 crc kubenswrapper[4847]: I0218 00:47:09.767787 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a47c-account-create-update-fcsjz"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.515981 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-8rvhw"]
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.517567 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8rvhw"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.519190 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.519490 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.524826 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-8rvhw"]
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.527099 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.623586 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/863d851c-3284-47db-8c80-d5d10f8c2b5c-dispersionconf\") pod \"swift-ring-rebalance-8rvhw\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " pod="openstack/swift-ring-rebalance-8rvhw"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.623681 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-etc-swift\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.623708 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/863d851c-3284-47db-8c80-d5d10f8c2b5c-swiftconf\") pod \"swift-ring-rebalance-8rvhw\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " pod="openstack/swift-ring-rebalance-8rvhw"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.623733 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/863d851c-3284-47db-8c80-d5d10f8c2b5c-ring-data-devices\") pod \"swift-ring-rebalance-8rvhw\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " pod="openstack/swift-ring-rebalance-8rvhw"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.623786 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b658n\" (UniqueName: \"kubernetes.io/projected/863d851c-3284-47db-8c80-d5d10f8c2b5c-kube-api-access-b658n\") pod \"swift-ring-rebalance-8rvhw\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " pod="openstack/swift-ring-rebalance-8rvhw"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.623819 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863d851c-3284-47db-8c80-d5d10f8c2b5c-combined-ca-bundle\") pod \"swift-ring-rebalance-8rvhw\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " pod="openstack/swift-ring-rebalance-8rvhw"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.623888 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/863d851c-3284-47db-8c80-d5d10f8c2b5c-scripts\") pod \"swift-ring-rebalance-8rvhw\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " pod="openstack/swift-ring-rebalance-8rvhw"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.623926 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/863d851c-3284-47db-8c80-d5d10f8c2b5c-etc-swift\") pod \"swift-ring-rebalance-8rvhw\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " pod="openstack/swift-ring-rebalance-8rvhw"
Feb 18 00:47:11 crc kubenswrapper[4847]: E0218 00:47:11.624116 4847 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 18 00:47:11 crc kubenswrapper[4847]: E0218 00:47:11.624133 4847 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 18 00:47:11 crc kubenswrapper[4847]: E0218 00:47:11.624180 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-etc-swift podName:623045fa-a3f1-4ad5-a5f7-361f31303bfb nodeName:}" failed. No retries permitted until 2026-02-18 00:47:15.624158201 +0000 UTC m=+1309.001509143 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-etc-swift") pod "swift-storage-0" (UID: "623045fa-a3f1-4ad5-a5f7-361f31303bfb") : configmap "swift-ring-files" not found
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.726089 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b658n\" (UniqueName: \"kubernetes.io/projected/863d851c-3284-47db-8c80-d5d10f8c2b5c-kube-api-access-b658n\") pod \"swift-ring-rebalance-8rvhw\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " pod="openstack/swift-ring-rebalance-8rvhw"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.726145 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863d851c-3284-47db-8c80-d5d10f8c2b5c-combined-ca-bundle\") pod \"swift-ring-rebalance-8rvhw\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " pod="openstack/swift-ring-rebalance-8rvhw"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.726203 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/863d851c-3284-47db-8c80-d5d10f8c2b5c-scripts\") pod \"swift-ring-rebalance-8rvhw\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " pod="openstack/swift-ring-rebalance-8rvhw"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.726234 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/863d851c-3284-47db-8c80-d5d10f8c2b5c-etc-swift\") pod \"swift-ring-rebalance-8rvhw\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " pod="openstack/swift-ring-rebalance-8rvhw"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.726272 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/863d851c-3284-47db-8c80-d5d10f8c2b5c-dispersionconf\") pod \"swift-ring-rebalance-8rvhw\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " pod="openstack/swift-ring-rebalance-8rvhw"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.726326 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/863d851c-3284-47db-8c80-d5d10f8c2b5c-swiftconf\") pod \"swift-ring-rebalance-8rvhw\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " pod="openstack/swift-ring-rebalance-8rvhw"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.726343 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/863d851c-3284-47db-8c80-d5d10f8c2b5c-ring-data-devices\") pod \"swift-ring-rebalance-8rvhw\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " pod="openstack/swift-ring-rebalance-8rvhw"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.727265 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/863d851c-3284-47db-8c80-d5d10f8c2b5c-ring-data-devices\") pod \"swift-ring-rebalance-8rvhw\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " pod="openstack/swift-ring-rebalance-8rvhw"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.727476 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/863d851c-3284-47db-8c80-d5d10f8c2b5c-etc-swift\") pod \"swift-ring-rebalance-8rvhw\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " pod="openstack/swift-ring-rebalance-8rvhw"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.727583 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/863d851c-3284-47db-8c80-d5d10f8c2b5c-scripts\") pod \"swift-ring-rebalance-8rvhw\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " pod="openstack/swift-ring-rebalance-8rvhw"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.746576 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863d851c-3284-47db-8c80-d5d10f8c2b5c-combined-ca-bundle\") pod \"swift-ring-rebalance-8rvhw\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " pod="openstack/swift-ring-rebalance-8rvhw"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.750119 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b658n\" (UniqueName: \"kubernetes.io/projected/863d851c-3284-47db-8c80-d5d10f8c2b5c-kube-api-access-b658n\") pod \"swift-ring-rebalance-8rvhw\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " pod="openstack/swift-ring-rebalance-8rvhw"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.750126 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/863d851c-3284-47db-8c80-d5d10f8c2b5c-dispersionconf\") pod \"swift-ring-rebalance-8rvhw\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " pod="openstack/swift-ring-rebalance-8rvhw"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.751165 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/863d851c-3284-47db-8c80-d5d10f8c2b5c-swiftconf\") pod \"swift-ring-rebalance-8rvhw\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " pod="openstack/swift-ring-rebalance-8rvhw"
Feb 18 00:47:11 crc kubenswrapper[4847]: I0218 00:47:11.837039 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8rvhw"
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.292340 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-a882-account-create-update-zprw4"
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.300498 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wmjzw"
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.330557 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-vk958"
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.396242 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-h4rtr"
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.441782 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qffjj"
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.449770 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhvtg\" (UniqueName: \"kubernetes.io/projected/899af78e-0f52-4b70-8817-47ea4fe4d344-kube-api-access-fhvtg\") pod \"899af78e-0f52-4b70-8817-47ea4fe4d344\" (UID: \"899af78e-0f52-4b70-8817-47ea4fe4d344\") "
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.449830 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dd945e5-694d-47a5-9817-8e6cff5a1c8b-operator-scripts\") pod \"4dd945e5-694d-47a5-9817-8e6cff5a1c8b\" (UID: \"4dd945e5-694d-47a5-9817-8e6cff5a1c8b\") "
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.449868 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c786cee2-3b0c-42f3-ba21-c5bb877332ef-config\") pod \"c786cee2-3b0c-42f3-ba21-c5bb877332ef\" (UID: \"c786cee2-3b0c-42f3-ba21-c5bb877332ef\") "
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.449892 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx5ln\" (UniqueName: \"kubernetes.io/projected/c786cee2-3b0c-42f3-ba21-c5bb877332ef-kube-api-access-cx5ln\") pod \"c786cee2-3b0c-42f3-ba21-c5bb877332ef\" (UID: \"c786cee2-3b0c-42f3-ba21-c5bb877332ef\") "
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.450392 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dd945e5-694d-47a5-9817-8e6cff5a1c8b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4dd945e5-694d-47a5-9817-8e6cff5a1c8b" (UID: "4dd945e5-694d-47a5-9817-8e6cff5a1c8b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.450456 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/899af78e-0f52-4b70-8817-47ea4fe4d344-operator-scripts\") pod \"899af78e-0f52-4b70-8817-47ea4fe4d344\" (UID: \"899af78e-0f52-4b70-8817-47ea4fe4d344\") "
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.450491 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6d8310c-c9e5-49cc-bc20-af6aacf1487d-operator-scripts\") pod \"d6d8310c-c9e5-49cc-bc20-af6aacf1487d\" (UID: \"d6d8310c-c9e5-49cc-bc20-af6aacf1487d\") "
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.450511 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c786cee2-3b0c-42f3-ba21-c5bb877332ef-dns-svc\") pod \"c786cee2-3b0c-42f3-ba21-c5bb877332ef\" (UID: \"c786cee2-3b0c-42f3-ba21-c5bb877332ef\") "
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.450542 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn8pw\" (UniqueName: \"kubernetes.io/projected/4dd945e5-694d-47a5-9817-8e6cff5a1c8b-kube-api-access-cn8pw\") pod \"4dd945e5-694d-47a5-9817-8e6cff5a1c8b\" (UID: \"4dd945e5-694d-47a5-9817-8e6cff5a1c8b\") "
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.450586 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg4jb\" (UniqueName: \"kubernetes.io/projected/d6d8310c-c9e5-49cc-bc20-af6aacf1487d-kube-api-access-wg4jb\") pod \"d6d8310c-c9e5-49cc-bc20-af6aacf1487d\" (UID: \"d6d8310c-c9e5-49cc-bc20-af6aacf1487d\") "
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.450658 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c786cee2-3b0c-42f3-ba21-c5bb877332ef-ovsdbserver-nb\") pod \"c786cee2-3b0c-42f3-ba21-c5bb877332ef\" (UID: \"c786cee2-3b0c-42f3-ba21-c5bb877332ef\") "
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.450682 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b9d5176-c1d3-4862-aa1d-4b0c5c412d48-operator-scripts\") pod \"1b9d5176-c1d3-4862-aa1d-4b0c5c412d48\" (UID: \"1b9d5176-c1d3-4862-aa1d-4b0c5c412d48\") "
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.450992 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4dd945e5-694d-47a5-9817-8e6cff5a1c8b-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.452139 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b9d5176-c1d3-4862-aa1d-4b0c5c412d48-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1b9d5176-c1d3-4862-aa1d-4b0c5c412d48" (UID: "1b9d5176-c1d3-4862-aa1d-4b0c5c412d48"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.452221 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/899af78e-0f52-4b70-8817-47ea4fe4d344-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "899af78e-0f52-4b70-8817-47ea4fe4d344" (UID: "899af78e-0f52-4b70-8817-47ea4fe4d344"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.456335 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6d8310c-c9e5-49cc-bc20-af6aacf1487d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d6d8310c-c9e5-49cc-bc20-af6aacf1487d" (UID: "d6d8310c-c9e5-49cc-bc20-af6aacf1487d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.459046 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6d8310c-c9e5-49cc-bc20-af6aacf1487d-kube-api-access-wg4jb" (OuterVolumeSpecName: "kube-api-access-wg4jb") pod "d6d8310c-c9e5-49cc-bc20-af6aacf1487d" (UID: "d6d8310c-c9e5-49cc-bc20-af6aacf1487d"). InnerVolumeSpecName "kube-api-access-wg4jb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.469282 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/899af78e-0f52-4b70-8817-47ea4fe4d344-kube-api-access-fhvtg" (OuterVolumeSpecName: "kube-api-access-fhvtg") pod "899af78e-0f52-4b70-8817-47ea4fe4d344" (UID: "899af78e-0f52-4b70-8817-47ea4fe4d344"). InnerVolumeSpecName "kube-api-access-fhvtg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.470329 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dd945e5-694d-47a5-9817-8e6cff5a1c8b-kube-api-access-cn8pw" (OuterVolumeSpecName: "kube-api-access-cn8pw") pod "4dd945e5-694d-47a5-9817-8e6cff5a1c8b" (UID: "4dd945e5-694d-47a5-9817-8e6cff5a1c8b"). InnerVolumeSpecName "kube-api-access-cn8pw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.473472 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7fd1-account-create-update-5t9jf"
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.483097 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c786cee2-3b0c-42f3-ba21-c5bb877332ef-kube-api-access-cx5ln" (OuterVolumeSpecName: "kube-api-access-cx5ln") pod "c786cee2-3b0c-42f3-ba21-c5bb877332ef" (UID: "c786cee2-3b0c-42f3-ba21-c5bb877332ef"). InnerVolumeSpecName "kube-api-access-cx5ln". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.488780 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qfzh5" Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.552312 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecd4a05d-c720-48e0-9ef0-1101d4ee0a17-operator-scripts\") pod \"ecd4a05d-c720-48e0-9ef0-1101d4ee0a17\" (UID: \"ecd4a05d-c720-48e0-9ef0-1101d4ee0a17\") " Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.552357 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blrwf\" (UniqueName: \"kubernetes.io/projected/39e4ef39-beda-44bd-bbaf-f3a8a4c8917b-kube-api-access-blrwf\") pod \"39e4ef39-beda-44bd-bbaf-f3a8a4c8917b\" (UID: \"39e4ef39-beda-44bd-bbaf-f3a8a4c8917b\") " Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.552493 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39e4ef39-beda-44bd-bbaf-f3a8a4c8917b-operator-scripts\") pod \"39e4ef39-beda-44bd-bbaf-f3a8a4c8917b\" (UID: \"39e4ef39-beda-44bd-bbaf-f3a8a4c8917b\") " Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.552538 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcmh4\" (UniqueName: \"kubernetes.io/projected/1b9d5176-c1d3-4862-aa1d-4b0c5c412d48-kube-api-access-zcmh4\") pod \"1b9d5176-c1d3-4862-aa1d-4b0c5c412d48\" (UID: \"1b9d5176-c1d3-4862-aa1d-4b0c5c412d48\") " Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.552557 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbqkn\" (UniqueName: \"kubernetes.io/projected/ecd4a05d-c720-48e0-9ef0-1101d4ee0a17-kube-api-access-xbqkn\") pod \"ecd4a05d-c720-48e0-9ef0-1101d4ee0a17\" (UID: \"ecd4a05d-c720-48e0-9ef0-1101d4ee0a17\") " Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.552971 4847 reconciler_common.go:293] "Volume 
detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b9d5176-c1d3-4862-aa1d-4b0c5c412d48-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.552983 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhvtg\" (UniqueName: \"kubernetes.io/projected/899af78e-0f52-4b70-8817-47ea4fe4d344-kube-api-access-fhvtg\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.552994 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx5ln\" (UniqueName: \"kubernetes.io/projected/c786cee2-3b0c-42f3-ba21-c5bb877332ef-kube-api-access-cx5ln\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.552981 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecd4a05d-c720-48e0-9ef0-1101d4ee0a17-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ecd4a05d-c720-48e0-9ef0-1101d4ee0a17" (UID: "ecd4a05d-c720-48e0-9ef0-1101d4ee0a17"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.553005 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/899af78e-0f52-4b70-8817-47ea4fe4d344-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.553058 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6d8310c-c9e5-49cc-bc20-af6aacf1487d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.553072 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn8pw\" (UniqueName: \"kubernetes.io/projected/4dd945e5-694d-47a5-9817-8e6cff5a1c8b-kube-api-access-cn8pw\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.553085 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg4jb\" (UniqueName: \"kubernetes.io/projected/d6d8310c-c9e5-49cc-bc20-af6aacf1487d-kube-api-access-wg4jb\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.553438 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39e4ef39-beda-44bd-bbaf-f3a8a4c8917b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "39e4ef39-beda-44bd-bbaf-f3a8a4c8917b" (UID: "39e4ef39-beda-44bd-bbaf-f3a8a4c8917b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.562935 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecd4a05d-c720-48e0-9ef0-1101d4ee0a17-kube-api-access-xbqkn" (OuterVolumeSpecName: "kube-api-access-xbqkn") pod "ecd4a05d-c720-48e0-9ef0-1101d4ee0a17" (UID: "ecd4a05d-c720-48e0-9ef0-1101d4ee0a17"). 
InnerVolumeSpecName "kube-api-access-xbqkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.565153 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b9d5176-c1d3-4862-aa1d-4b0c5c412d48-kube-api-access-zcmh4" (OuterVolumeSpecName: "kube-api-access-zcmh4") pod "1b9d5176-c1d3-4862-aa1d-4b0c5c412d48" (UID: "1b9d5176-c1d3-4862-aa1d-4b0c5c412d48"). InnerVolumeSpecName "kube-api-access-zcmh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.566877 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39e4ef39-beda-44bd-bbaf-f3a8a4c8917b-kube-api-access-blrwf" (OuterVolumeSpecName: "kube-api-access-blrwf") pod "39e4ef39-beda-44bd-bbaf-f3a8a4c8917b" (UID: "39e4ef39-beda-44bd-bbaf-f3a8a4c8917b"). InnerVolumeSpecName "kube-api-access-blrwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.640285 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c786cee2-3b0c-42f3-ba21-c5bb877332ef-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c786cee2-3b0c-42f3-ba21-c5bb877332ef" (UID: "c786cee2-3b0c-42f3-ba21-c5bb877332ef"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.651031 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c786cee2-3b0c-42f3-ba21-c5bb877332ef-config" (OuterVolumeSpecName: "config") pod "c786cee2-3b0c-42f3-ba21-c5bb877332ef" (UID: "c786cee2-3b0c-42f3-ba21-c5bb877332ef"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.654679 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecd4a05d-c720-48e0-9ef0-1101d4ee0a17-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.654716 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blrwf\" (UniqueName: \"kubernetes.io/projected/39e4ef39-beda-44bd-bbaf-f3a8a4c8917b-kube-api-access-blrwf\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.654731 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c786cee2-3b0c-42f3-ba21-c5bb877332ef-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.654744 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39e4ef39-beda-44bd-bbaf-f3a8a4c8917b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.654756 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c786cee2-3b0c-42f3-ba21-c5bb877332ef-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.654767 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcmh4\" (UniqueName: \"kubernetes.io/projected/1b9d5176-c1d3-4862-aa1d-4b0c5c412d48-kube-api-access-zcmh4\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.654778 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbqkn\" (UniqueName: \"kubernetes.io/projected/ecd4a05d-c720-48e0-9ef0-1101d4ee0a17-kube-api-access-xbqkn\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:12 crc 
kubenswrapper[4847]: I0218 00:47:12.662292 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c786cee2-3b0c-42f3-ba21-c5bb877332ef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c786cee2-3b0c-42f3-ba21-c5bb877332ef" (UID: "c786cee2-3b0c-42f3-ba21-c5bb877332ef"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.724388 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-68cc555589-wskw7" podUID="79178e72-a62d-47ea-ba8c-7dfdf3171258" containerName="console" containerID="cri-o://35855e15ee11fe5131c72f468378961facd97300c5e7c29f11b7ef4fa581684a" gracePeriod=15 Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.756728 4847 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c786cee2-3b0c-42f3-ba21-c5bb877332ef-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.833285 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a47c-account-create-update-fcsjz"] Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.849351 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-8qtg4"] Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.864886 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-8rvhw"] Feb 18 00:47:12 crc kubenswrapper[4847]: I0218 00:47:12.877476 4847 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.056476 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a47c-account-create-update-fcsjz" event={"ID":"4256ea1b-a495-4301-b5b9-1e376c78852e","Type":"ContainerStarted","Data":"f6bbc64b05f5c2203e1384ec229c45b49c47fe951affebe7772a1b78e0e4c3d1"} 
Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.058246 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qffjj" event={"ID":"1b9d5176-c1d3-4862-aa1d-4b0c5c412d48","Type":"ContainerDied","Data":"2ed2f3289c21d08b940cfa26fc84eccbf3ce575c781b4c4ec4ba3b7b95bc58de"} Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.058289 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ed2f3289c21d08b940cfa26fc84eccbf3ce575c781b4c4ec4ba3b7b95bc58de" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.058344 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qffjj" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.062124 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" event={"ID":"c786cee2-3b0c-42f3-ba21-c5bb877332ef","Type":"ContainerDied","Data":"40f865ec2751861c902ba9a5974b98c1fa60fb15d7412a1d52178fbb73b82144"} Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.062172 4847 scope.go:117] "RemoveContainer" containerID="033f04cb72d8bccc691dbe922eb06472bc74f5e37959bdf4c299af13fd4259cd" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.062283 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-vk958" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.071555 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-68cc555589-wskw7_79178e72-a62d-47ea-ba8c-7dfdf3171258/console/0.log" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.071591 4847 generic.go:334] "Generic (PLEG): container finished" podID="79178e72-a62d-47ea-ba8c-7dfdf3171258" containerID="35855e15ee11fe5131c72f468378961facd97300c5e7c29f11b7ef4fa581684a" exitCode=2 Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.071654 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68cc555589-wskw7" event={"ID":"79178e72-a62d-47ea-ba8c-7dfdf3171258","Type":"ContainerDied","Data":"35855e15ee11fe5131c72f468378961facd97300c5e7c29f11b7ef4fa581684a"} Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.074631 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-h4rtr" event={"ID":"d6d8310c-c9e5-49cc-bc20-af6aacf1487d","Type":"ContainerDied","Data":"70d998927d3b2d1b781a9115d698376cf238d66e4102bf3e1df0e6cf6dc491a2"} Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.074655 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70d998927d3b2d1b781a9115d698376cf238d66e4102bf3e1df0e6cf6dc491a2" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.074702 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-h4rtr" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.076898 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7fd1-account-create-update-5t9jf" event={"ID":"ecd4a05d-c720-48e0-9ef0-1101d4ee0a17","Type":"ContainerDied","Data":"f81987a532a19c4dcaecdadfb670e825691a7f350d28dd1bdb6e2de9e8f25756"} Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.076921 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f81987a532a19c4dcaecdadfb670e825691a7f350d28dd1bdb6e2de9e8f25756" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.076956 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7fd1-account-create-update-5t9jf" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.078836 4847 generic.go:334] "Generic (PLEG): container finished" podID="aa5356b9-df2c-412d-ac6d-4039afc1286b" containerID="7c3054a32e13a3bfd577f6bf0b196211289c855d971f59577f8d7ab705caf8fe" exitCode=0 Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.078886 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" event={"ID":"aa5356b9-df2c-412d-ac6d-4039afc1286b","Type":"ContainerDied","Data":"7c3054a32e13a3bfd577f6bf0b196211289c855d971f59577f8d7ab705caf8fe"} Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.085998 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8qtg4" event={"ID":"cfb5a063-20d2-4791-a141-7e87555bc17d","Type":"ContainerStarted","Data":"6f8cd93da3e8eec9448f69f7008880668fa308ae699a10fbe871d6cb84571455"} Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.092762 4847 generic.go:334] "Generic (PLEG): container finished" podID="9a17f384-68bb-4ce1-be12-102c477b5968" containerID="9c0a07a32d365b4171acf280d59d30881e56eff9dd6d1ad4ce14824033012b71" exitCode=0 Feb 18 00:47:13 crc 
kubenswrapper[4847]: I0218 00:47:13.092817 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1377-account-create-update-76bd5" event={"ID":"9a17f384-68bb-4ce1-be12-102c477b5968","Type":"ContainerDied","Data":"9c0a07a32d365b4171acf280d59d30881e56eff9dd6d1ad4ce14824033012b71"} Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.094509 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qfzh5" event={"ID":"39e4ef39-beda-44bd-bbaf-f3a8a4c8917b","Type":"ContainerDied","Data":"cc6916034656dfe19f0f2a18570a946f8b2362b745386fe9eae71063fe982726"} Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.094531 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc6916034656dfe19f0f2a18570a946f8b2362b745386fe9eae71063fe982726" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.094582 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qfzh5" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.101035 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wmjzw" event={"ID":"899af78e-0f52-4b70-8817-47ea4fe4d344","Type":"ContainerDied","Data":"7f33f062a84dc4c83b9b41ce2186ac7fae09c021d40b85b24e6c4ad0397afab9"} Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.101140 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f33f062a84dc4c83b9b41ce2186ac7fae09c021d40b85b24e6c4ad0397afab9" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.101229 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-wmjzw" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.108334 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1989970b-d11c-44b8-b0b7-011c8e842c1f","Type":"ContainerStarted","Data":"5dd8945c9fed1a5fef0dcfc0c944193448bc995faaf5e716b23e8dee7b71128b"} Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.127223 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8rvhw" event={"ID":"863d851c-3284-47db-8c80-d5d10f8c2b5c","Type":"ContainerStarted","Data":"a6e3021124f08a66a0722c4143375144fdf1d0c754c8ffe692a675fe1daf758a"} Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.129058 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-a882-account-create-update-zprw4" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.130705 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-a882-account-create-update-zprw4" event={"ID":"4dd945e5-694d-47a5-9817-8e6cff5a1c8b","Type":"ContainerDied","Data":"46fcba4fdf626f5da76d33027d0b372ee079e4ae782a197592bd3071b3ab18fa"} Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.130732 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46fcba4fdf626f5da76d33027d0b372ee079e4ae782a197592bd3071b3ab18fa" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.163241 4847 scope.go:117] "RemoveContainer" containerID="920f2506e8d47f1682c23ebabeeb2e218f5769dadaea720584eb48aa92a4ed65" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.220083 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-vk958"] Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.229719 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-vk958"] Feb 18 00:47:13 crc 
kubenswrapper[4847]: I0218 00:47:13.232168 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-68cc555589-wskw7_79178e72-a62d-47ea-ba8c-7dfdf3171258/console/0.log" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.232235 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.364526 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8blx6\" (UniqueName: \"kubernetes.io/projected/79178e72-a62d-47ea-ba8c-7dfdf3171258-kube-api-access-8blx6\") pod \"79178e72-a62d-47ea-ba8c-7dfdf3171258\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.364815 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-trusted-ca-bundle\") pod \"79178e72-a62d-47ea-ba8c-7dfdf3171258\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.364871 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-console-config\") pod \"79178e72-a62d-47ea-ba8c-7dfdf3171258\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.364899 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-service-ca\") pod \"79178e72-a62d-47ea-ba8c-7dfdf3171258\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.364992 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/79178e72-a62d-47ea-ba8c-7dfdf3171258-console-oauth-config\") pod \"79178e72-a62d-47ea-ba8c-7dfdf3171258\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.365078 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-oauth-serving-cert\") pod \"79178e72-a62d-47ea-ba8c-7dfdf3171258\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.365104 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79178e72-a62d-47ea-ba8c-7dfdf3171258-console-serving-cert\") pod \"79178e72-a62d-47ea-ba8c-7dfdf3171258\" (UID: \"79178e72-a62d-47ea-ba8c-7dfdf3171258\") " Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.365493 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "79178e72-a62d-47ea-ba8c-7dfdf3171258" (UID: "79178e72-a62d-47ea-ba8c-7dfdf3171258"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.365544 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-service-ca" (OuterVolumeSpecName: "service-ca") pod "79178e72-a62d-47ea-ba8c-7dfdf3171258" (UID: "79178e72-a62d-47ea-ba8c-7dfdf3171258"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.365992 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "79178e72-a62d-47ea-ba8c-7dfdf3171258" (UID: "79178e72-a62d-47ea-ba8c-7dfdf3171258"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.365826 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-console-config" (OuterVolumeSpecName: "console-config") pod "79178e72-a62d-47ea-ba8c-7dfdf3171258" (UID: "79178e72-a62d-47ea-ba8c-7dfdf3171258"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.369819 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79178e72-a62d-47ea-ba8c-7dfdf3171258-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "79178e72-a62d-47ea-ba8c-7dfdf3171258" (UID: "79178e72-a62d-47ea-ba8c-7dfdf3171258"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.369953 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79178e72-a62d-47ea-ba8c-7dfdf3171258-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "79178e72-a62d-47ea-ba8c-7dfdf3171258" (UID: "79178e72-a62d-47ea-ba8c-7dfdf3171258"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.370642 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79178e72-a62d-47ea-ba8c-7dfdf3171258-kube-api-access-8blx6" (OuterVolumeSpecName: "kube-api-access-8blx6") pod "79178e72-a62d-47ea-ba8c-7dfdf3171258" (UID: "79178e72-a62d-47ea-ba8c-7dfdf3171258"). InnerVolumeSpecName "kube-api-access-8blx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.416763 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c786cee2-3b0c-42f3-ba21-c5bb877332ef" path="/var/lib/kubelet/pods/c786cee2-3b0c-42f3-ba21-c5bb877332ef/volumes" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.467290 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8blx6\" (UniqueName: \"kubernetes.io/projected/79178e72-a62d-47ea-ba8c-7dfdf3171258-kube-api-access-8blx6\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.467330 4847 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.467343 4847 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.467357 4847 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.467370 4847 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/79178e72-a62d-47ea-ba8c-7dfdf3171258-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.467384 4847 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/79178e72-a62d-47ea-ba8c-7dfdf3171258-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:13 crc kubenswrapper[4847]: I0218 00:47:13.467396 4847 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79178e72-a62d-47ea-ba8c-7dfdf3171258-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:14 crc kubenswrapper[4847]: I0218 00:47:14.139370 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-68cc555589-wskw7_79178e72-a62d-47ea-ba8c-7dfdf3171258/console/0.log" Feb 18 00:47:14 crc kubenswrapper[4847]: I0218 00:47:14.139447 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68cc555589-wskw7" event={"ID":"79178e72-a62d-47ea-ba8c-7dfdf3171258","Type":"ContainerDied","Data":"238e7763895802127a1b2692078b46ec6712c243cd612661f7556a5310fe0f5e"} Feb 18 00:47:14 crc kubenswrapper[4847]: I0218 00:47:14.139493 4847 scope.go:117] "RemoveContainer" containerID="35855e15ee11fe5131c72f468378961facd97300c5e7c29f11b7ef4fa581684a" Feb 18 00:47:14 crc kubenswrapper[4847]: I0218 00:47:14.139518 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-68cc555589-wskw7" Feb 18 00:47:14 crc kubenswrapper[4847]: I0218 00:47:14.142105 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" event={"ID":"aa5356b9-df2c-412d-ac6d-4039afc1286b","Type":"ContainerStarted","Data":"9cd439d3918d3de44fefad060cafc1243cef2a67a0f638ddf23cdb6db8425907"} Feb 18 00:47:14 crc kubenswrapper[4847]: I0218 00:47:14.142303 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" Feb 18 00:47:14 crc kubenswrapper[4847]: I0218 00:47:14.144858 4847 generic.go:334] "Generic (PLEG): container finished" podID="cfb5a063-20d2-4791-a141-7e87555bc17d" containerID="8f9a33a665621e44dc52192e579be159d8e9593d68e6567ffbf97c1e6d9cc0e3" exitCode=0 Feb 18 00:47:14 crc kubenswrapper[4847]: I0218 00:47:14.144913 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8qtg4" event={"ID":"cfb5a063-20d2-4791-a141-7e87555bc17d","Type":"ContainerDied","Data":"8f9a33a665621e44dc52192e579be159d8e9593d68e6567ffbf97c1e6d9cc0e3"} Feb 18 00:47:14 crc kubenswrapper[4847]: I0218 00:47:14.148843 4847 generic.go:334] "Generic (PLEG): container finished" podID="4256ea1b-a495-4301-b5b9-1e376c78852e" containerID="3388dd244b26737d6be8a2c57d074c37cac5aaa2d3dcedeaa5f3518a164dee09" exitCode=0 Feb 18 00:47:14 crc kubenswrapper[4847]: I0218 00:47:14.148937 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a47c-account-create-update-fcsjz" event={"ID":"4256ea1b-a495-4301-b5b9-1e376c78852e","Type":"ContainerDied","Data":"3388dd244b26737d6be8a2c57d074c37cac5aaa2d3dcedeaa5f3518a164dee09"} Feb 18 00:47:14 crc kubenswrapper[4847]: I0218 00:47:14.182660 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-68cc555589-wskw7"] Feb 18 00:47:14 crc kubenswrapper[4847]: I0218 00:47:14.197467 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-console/console-68cc555589-wskw7"] Feb 18 00:47:14 crc kubenswrapper[4847]: I0218 00:47:14.199618 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" podStartSLOduration=8.199585411 podStartE2EDuration="8.199585411s" podCreationTimestamp="2026-02-18 00:47:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:47:14.185468487 +0000 UTC m=+1307.562819479" watchObservedRunningTime="2026-02-18 00:47:14.199585411 +0000 UTC m=+1307.576936353" Feb 18 00:47:15 crc kubenswrapper[4847]: I0218 00:47:15.414110 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79178e72-a62d-47ea-ba8c-7dfdf3171258" path="/var/lib/kubelet/pods/79178e72-a62d-47ea-ba8c-7dfdf3171258/volumes" Feb 18 00:47:15 crc kubenswrapper[4847]: I0218 00:47:15.714708 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-etc-swift\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0" Feb 18 00:47:15 crc kubenswrapper[4847]: E0218 00:47:15.714885 4847 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 00:47:15 crc kubenswrapper[4847]: E0218 00:47:15.715068 4847 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 00:47:15 crc kubenswrapper[4847]: E0218 00:47:15.715133 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-etc-swift podName:623045fa-a3f1-4ad5-a5f7-361f31303bfb nodeName:}" failed. No retries permitted until 2026-02-18 00:47:23.715111005 +0000 UTC m=+1317.092461947 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-etc-swift") pod "swift-storage-0" (UID: "623045fa-a3f1-4ad5-a5f7-361f31303bfb") : configmap "swift-ring-files" not found Feb 18 00:47:15 crc kubenswrapper[4847]: I0218 00:47:15.958654 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qfzh5"] Feb 18 00:47:15 crc kubenswrapper[4847]: I0218 00:47:15.967047 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qfzh5"] Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.224728 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1989970b-d11c-44b8-b0b7-011c8e842c1f","Type":"ContainerStarted","Data":"9b833e15ee94a91431b1f7cd984e8f8d1f794fc25fe0aa85a270e7ba875700da"} Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.766173 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-wjfr7"] Feb 18 00:47:16 crc kubenswrapper[4847]: E0218 00:47:16.766555 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6d8310c-c9e5-49cc-bc20-af6aacf1487d" containerName="mariadb-database-create" Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.766567 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6d8310c-c9e5-49cc-bc20-af6aacf1487d" containerName="mariadb-database-create" Feb 18 00:47:16 crc kubenswrapper[4847]: E0218 00:47:16.766576 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c786cee2-3b0c-42f3-ba21-c5bb877332ef" containerName="dnsmasq-dns" Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.766581 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="c786cee2-3b0c-42f3-ba21-c5bb877332ef" containerName="dnsmasq-dns" Feb 18 00:47:16 crc kubenswrapper[4847]: E0218 00:47:16.766595 4847 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="39e4ef39-beda-44bd-bbaf-f3a8a4c8917b" containerName="mariadb-account-create-update" Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.766615 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e4ef39-beda-44bd-bbaf-f3a8a4c8917b" containerName="mariadb-account-create-update" Feb 18 00:47:16 crc kubenswrapper[4847]: E0218 00:47:16.766621 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dd945e5-694d-47a5-9817-8e6cff5a1c8b" containerName="mariadb-account-create-update" Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.766627 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd945e5-694d-47a5-9817-8e6cff5a1c8b" containerName="mariadb-account-create-update" Feb 18 00:47:16 crc kubenswrapper[4847]: E0218 00:47:16.766639 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c786cee2-3b0c-42f3-ba21-c5bb877332ef" containerName="init" Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.766645 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="c786cee2-3b0c-42f3-ba21-c5bb877332ef" containerName="init" Feb 18 00:47:16 crc kubenswrapper[4847]: E0218 00:47:16.766658 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79178e72-a62d-47ea-ba8c-7dfdf3171258" containerName="console" Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.766664 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="79178e72-a62d-47ea-ba8c-7dfdf3171258" containerName="console" Feb 18 00:47:16 crc kubenswrapper[4847]: E0218 00:47:16.766685 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecd4a05d-c720-48e0-9ef0-1101d4ee0a17" containerName="mariadb-account-create-update" Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.766690 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecd4a05d-c720-48e0-9ef0-1101d4ee0a17" containerName="mariadb-account-create-update" Feb 18 00:47:16 crc kubenswrapper[4847]: E0218 00:47:16.766699 4847 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b9d5176-c1d3-4862-aa1d-4b0c5c412d48" containerName="mariadb-database-create" Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.766706 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b9d5176-c1d3-4862-aa1d-4b0c5c412d48" containerName="mariadb-database-create" Feb 18 00:47:16 crc kubenswrapper[4847]: E0218 00:47:16.766716 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899af78e-0f52-4b70-8817-47ea4fe4d344" containerName="mariadb-database-create" Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.766722 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="899af78e-0f52-4b70-8817-47ea4fe4d344" containerName="mariadb-database-create" Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.766867 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dd945e5-694d-47a5-9817-8e6cff5a1c8b" containerName="mariadb-account-create-update" Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.766892 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="899af78e-0f52-4b70-8817-47ea4fe4d344" containerName="mariadb-database-create" Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.766901 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="79178e72-a62d-47ea-ba8c-7dfdf3171258" containerName="console" Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.766912 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6d8310c-c9e5-49cc-bc20-af6aacf1487d" containerName="mariadb-database-create" Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.766920 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="c786cee2-3b0c-42f3-ba21-c5bb877332ef" containerName="dnsmasq-dns" Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.766930 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="39e4ef39-beda-44bd-bbaf-f3a8a4c8917b" 
containerName="mariadb-account-create-update" Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.766942 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b9d5176-c1d3-4862-aa1d-4b0c5c412d48" containerName="mariadb-database-create" Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.766953 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecd4a05d-c720-48e0-9ef0-1101d4ee0a17" containerName="mariadb-account-create-update" Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.767588 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjfr7" Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.796315 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-wjfr7"] Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.960680 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e79eaa76-7d45-436b-a23f-157ce98678ba-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-wjfr7\" (UID: \"e79eaa76-7d45-436b-a23f-157ce98678ba\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjfr7" Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.961015 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq4d2\" (UniqueName: \"kubernetes.io/projected/e79eaa76-7d45-436b-a23f-157ce98678ba-kube-api-access-hq4d2\") pod \"mysqld-exporter-openstack-cell1-db-create-wjfr7\" (UID: \"e79eaa76-7d45-436b-a23f-157ce98678ba\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjfr7" Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.983332 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-e8b0-account-create-update-qnp69"] Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 
00:47:16.984999 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-e8b0-account-create-update-qnp69" Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.986849 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Feb 18 00:47:16 crc kubenswrapper[4847]: I0218 00:47:16.993068 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-e8b0-account-create-update-qnp69"] Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.062654 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e79eaa76-7d45-436b-a23f-157ce98678ba-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-wjfr7\" (UID: \"e79eaa76-7d45-436b-a23f-157ce98678ba\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjfr7" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.062983 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq4d2\" (UniqueName: \"kubernetes.io/projected/e79eaa76-7d45-436b-a23f-157ce98678ba-kube-api-access-hq4d2\") pod \"mysqld-exporter-openstack-cell1-db-create-wjfr7\" (UID: \"e79eaa76-7d45-436b-a23f-157ce98678ba\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjfr7" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.073826 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e79eaa76-7d45-436b-a23f-157ce98678ba-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-wjfr7\" (UID: \"e79eaa76-7d45-436b-a23f-157ce98678ba\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjfr7" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.084424 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq4d2\" (UniqueName: 
\"kubernetes.io/projected/e79eaa76-7d45-436b-a23f-157ce98678ba-kube-api-access-hq4d2\") pod \"mysqld-exporter-openstack-cell1-db-create-wjfr7\" (UID: \"e79eaa76-7d45-436b-a23f-157ce98678ba\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjfr7" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.109254 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjfr7" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.165396 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2bvd\" (UniqueName: \"kubernetes.io/projected/815709d3-9a7d-4e0e-a44e-a60ad1428919-kube-api-access-t2bvd\") pod \"mysqld-exporter-e8b0-account-create-update-qnp69\" (UID: \"815709d3-9a7d-4e0e-a44e-a60ad1428919\") " pod="openstack/mysqld-exporter-e8b0-account-create-update-qnp69" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.165633 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/815709d3-9a7d-4e0e-a44e-a60ad1428919-operator-scripts\") pod \"mysqld-exporter-e8b0-account-create-update-qnp69\" (UID: \"815709d3-9a7d-4e0e-a44e-a60ad1428919\") " pod="openstack/mysqld-exporter-e8b0-account-create-update-qnp69" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.245090 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1377-account-create-update-76bd5" event={"ID":"9a17f384-68bb-4ce1-be12-102c477b5968","Type":"ContainerDied","Data":"b0126415fd97ccd5017cca45f34db9aadbb8d794e52647ffcdec3cf4b9f9b3d5"} Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.245470 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0126415fd97ccd5017cca45f34db9aadbb8d794e52647ffcdec3cf4b9f9b3d5" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.248303 4847 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-8qtg4" event={"ID":"cfb5a063-20d2-4791-a141-7e87555bc17d","Type":"ContainerDied","Data":"6f8cd93da3e8eec9448f69f7008880668fa308ae699a10fbe871d6cb84571455"} Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.248342 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f8cd93da3e8eec9448f69f7008880668fa308ae699a10fbe871d6cb84571455" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.253588 4847 generic.go:334] "Generic (PLEG): container finished" podID="d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d" containerID="fd125797db78eb9c1069ec9e94328c327c2fce1794180d3c76711691cd2e7ec9" exitCode=0 Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.253711 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d","Type":"ContainerDied","Data":"fd125797db78eb9c1069ec9e94328c327c2fce1794180d3c76711691cd2e7ec9"} Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.261268 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a47c-account-create-update-fcsjz" event={"ID":"4256ea1b-a495-4301-b5b9-1e376c78852e","Type":"ContainerDied","Data":"f6bbc64b05f5c2203e1384ec229c45b49c47fe951affebe7772a1b78e0e4c3d1"} Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.261304 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6bbc64b05f5c2203e1384ec229c45b49c47fe951affebe7772a1b78e0e4c3d1" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.267774 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2bvd\" (UniqueName: \"kubernetes.io/projected/815709d3-9a7d-4e0e-a44e-a60ad1428919-kube-api-access-t2bvd\") pod \"mysqld-exporter-e8b0-account-create-update-qnp69\" (UID: \"815709d3-9a7d-4e0e-a44e-a60ad1428919\") " 
pod="openstack/mysqld-exporter-e8b0-account-create-update-qnp69" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.267924 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/815709d3-9a7d-4e0e-a44e-a60ad1428919-operator-scripts\") pod \"mysqld-exporter-e8b0-account-create-update-qnp69\" (UID: \"815709d3-9a7d-4e0e-a44e-a60ad1428919\") " pod="openstack/mysqld-exporter-e8b0-account-create-update-qnp69" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.268745 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/815709d3-9a7d-4e0e-a44e-a60ad1428919-operator-scripts\") pod \"mysqld-exporter-e8b0-account-create-update-qnp69\" (UID: \"815709d3-9a7d-4e0e-a44e-a60ad1428919\") " pod="openstack/mysqld-exporter-e8b0-account-create-update-qnp69" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.294405 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1377-account-create-update-76bd5" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.294483 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2bvd\" (UniqueName: \"kubernetes.io/projected/815709d3-9a7d-4e0e-a44e-a60ad1428919-kube-api-access-t2bvd\") pod \"mysqld-exporter-e8b0-account-create-update-qnp69\" (UID: \"815709d3-9a7d-4e0e-a44e-a60ad1428919\") " pod="openstack/mysqld-exporter-e8b0-account-create-update-qnp69" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.297037 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a47c-account-create-update-fcsjz" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.302854 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-e8b0-account-create-update-qnp69" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.319248 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-8qtg4" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.419787 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39e4ef39-beda-44bd-bbaf-f3a8a4c8917b" path="/var/lib/kubelet/pods/39e4ef39-beda-44bd-bbaf-f3a8a4c8917b/volumes" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.470647 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbx5b\" (UniqueName: \"kubernetes.io/projected/4256ea1b-a495-4301-b5b9-1e376c78852e-kube-api-access-kbx5b\") pod \"4256ea1b-a495-4301-b5b9-1e376c78852e\" (UID: \"4256ea1b-a495-4301-b5b9-1e376c78852e\") " Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.471065 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn2fx\" (UniqueName: \"kubernetes.io/projected/cfb5a063-20d2-4791-a141-7e87555bc17d-kube-api-access-qn2fx\") pod \"cfb5a063-20d2-4791-a141-7e87555bc17d\" (UID: \"cfb5a063-20d2-4791-a141-7e87555bc17d\") " Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.471118 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4256ea1b-a495-4301-b5b9-1e376c78852e-operator-scripts\") pod \"4256ea1b-a495-4301-b5b9-1e376c78852e\" (UID: \"4256ea1b-a495-4301-b5b9-1e376c78852e\") " Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.471231 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jpx5\" (UniqueName: \"kubernetes.io/projected/9a17f384-68bb-4ce1-be12-102c477b5968-kube-api-access-4jpx5\") pod \"9a17f384-68bb-4ce1-be12-102c477b5968\" (UID: \"9a17f384-68bb-4ce1-be12-102c477b5968\") " Feb 18 
00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.471293 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfb5a063-20d2-4791-a141-7e87555bc17d-operator-scripts\") pod \"cfb5a063-20d2-4791-a141-7e87555bc17d\" (UID: \"cfb5a063-20d2-4791-a141-7e87555bc17d\") " Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.472051 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a17f384-68bb-4ce1-be12-102c477b5968-operator-scripts\") pod \"9a17f384-68bb-4ce1-be12-102c477b5968\" (UID: \"9a17f384-68bb-4ce1-be12-102c477b5968\") " Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.472326 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4256ea1b-a495-4301-b5b9-1e376c78852e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4256ea1b-a495-4301-b5b9-1e376c78852e" (UID: "4256ea1b-a495-4301-b5b9-1e376c78852e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.472397 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfb5a063-20d2-4791-a141-7e87555bc17d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cfb5a063-20d2-4791-a141-7e87555bc17d" (UID: "cfb5a063-20d2-4791-a141-7e87555bc17d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.472917 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4256ea1b-a495-4301-b5b9-1e376c78852e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.472935 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfb5a063-20d2-4791-a141-7e87555bc17d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.475933 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a17f384-68bb-4ce1-be12-102c477b5968-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a17f384-68bb-4ce1-be12-102c477b5968" (UID: "9a17f384-68bb-4ce1-be12-102c477b5968"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.477847 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4256ea1b-a495-4301-b5b9-1e376c78852e-kube-api-access-kbx5b" (OuterVolumeSpecName: "kube-api-access-kbx5b") pod "4256ea1b-a495-4301-b5b9-1e376c78852e" (UID: "4256ea1b-a495-4301-b5b9-1e376c78852e"). InnerVolumeSpecName "kube-api-access-kbx5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.482889 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a17f384-68bb-4ce1-be12-102c477b5968-kube-api-access-4jpx5" (OuterVolumeSpecName: "kube-api-access-4jpx5") pod "9a17f384-68bb-4ce1-be12-102c477b5968" (UID: "9a17f384-68bb-4ce1-be12-102c477b5968"). InnerVolumeSpecName "kube-api-access-4jpx5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.486033 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfb5a063-20d2-4791-a141-7e87555bc17d-kube-api-access-qn2fx" (OuterVolumeSpecName: "kube-api-access-qn2fx") pod "cfb5a063-20d2-4791-a141-7e87555bc17d" (UID: "cfb5a063-20d2-4791-a141-7e87555bc17d"). InnerVolumeSpecName "kube-api-access-qn2fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.529836 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-nsf78"] Feb 18 00:47:17 crc kubenswrapper[4847]: E0218 00:47:17.530220 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfb5a063-20d2-4791-a141-7e87555bc17d" containerName="mariadb-database-create" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.530236 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfb5a063-20d2-4791-a141-7e87555bc17d" containerName="mariadb-database-create" Feb 18 00:47:17 crc kubenswrapper[4847]: E0218 00:47:17.530247 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4256ea1b-a495-4301-b5b9-1e376c78852e" containerName="mariadb-account-create-update" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.530254 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="4256ea1b-a495-4301-b5b9-1e376c78852e" containerName="mariadb-account-create-update" Feb 18 00:47:17 crc kubenswrapper[4847]: E0218 00:47:17.530283 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a17f384-68bb-4ce1-be12-102c477b5968" containerName="mariadb-account-create-update" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.530289 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a17f384-68bb-4ce1-be12-102c477b5968" containerName="mariadb-account-create-update" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 
00:47:17.530488 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a17f384-68bb-4ce1-be12-102c477b5968" containerName="mariadb-account-create-update" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.530502 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfb5a063-20d2-4791-a141-7e87555bc17d" containerName="mariadb-database-create" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.530521 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="4256ea1b-a495-4301-b5b9-1e376c78852e" containerName="mariadb-account-create-update" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.531139 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nsf78" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.533830 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.540537 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nsf78"] Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.574255 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbx5b\" (UniqueName: \"kubernetes.io/projected/4256ea1b-a495-4301-b5b9-1e376c78852e-kube-api-access-kbx5b\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.574284 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn2fx\" (UniqueName: \"kubernetes.io/projected/cfb5a063-20d2-4791-a141-7e87555bc17d-kube-api-access-qn2fx\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.574293 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jpx5\" (UniqueName: \"kubernetes.io/projected/9a17f384-68bb-4ce1-be12-102c477b5968-kube-api-access-4jpx5\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:17 
crc kubenswrapper[4847]: I0218 00:47:17.574323 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a17f384-68bb-4ce1-be12-102c477b5968-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.644204 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-wjfr7"] Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.677798 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc5g5\" (UniqueName: \"kubernetes.io/projected/811aea20-9057-472e-9a14-e2f04ad204cd-kube-api-access-mc5g5\") pod \"root-account-create-update-nsf78\" (UID: \"811aea20-9057-472e-9a14-e2f04ad204cd\") " pod="openstack/root-account-create-update-nsf78" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.677864 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/811aea20-9057-472e-9a14-e2f04ad204cd-operator-scripts\") pod \"root-account-create-update-nsf78\" (UID: \"811aea20-9057-472e-9a14-e2f04ad204cd\") " pod="openstack/root-account-create-update-nsf78" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.779329 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc5g5\" (UniqueName: \"kubernetes.io/projected/811aea20-9057-472e-9a14-e2f04ad204cd-kube-api-access-mc5g5\") pod \"root-account-create-update-nsf78\" (UID: \"811aea20-9057-472e-9a14-e2f04ad204cd\") " pod="openstack/root-account-create-update-nsf78" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.779394 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/811aea20-9057-472e-9a14-e2f04ad204cd-operator-scripts\") pod 
\"root-account-create-update-nsf78\" (UID: \"811aea20-9057-472e-9a14-e2f04ad204cd\") " pod="openstack/root-account-create-update-nsf78" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.780169 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/811aea20-9057-472e-9a14-e2f04ad204cd-operator-scripts\") pod \"root-account-create-update-nsf78\" (UID: \"811aea20-9057-472e-9a14-e2f04ad204cd\") " pod="openstack/root-account-create-update-nsf78" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.803825 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc5g5\" (UniqueName: \"kubernetes.io/projected/811aea20-9057-472e-9a14-e2f04ad204cd-kube-api-access-mc5g5\") pod \"root-account-create-update-nsf78\" (UID: \"811aea20-9057-472e-9a14-e2f04ad204cd\") " pod="openstack/root-account-create-update-nsf78" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.848494 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-nsf78" Feb 18 00:47:17 crc kubenswrapper[4847]: I0218 00:47:17.892419 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-e8b0-account-create-update-qnp69"] Feb 18 00:47:17 crc kubenswrapper[4847]: W0218 00:47:17.910738 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod815709d3_9a7d_4e0e_a44e_a60ad1428919.slice/crio-c7b2a6d21313d9b4a73b3b7cd7791410e910eca0bf26b3ef58d146261be93a90 WatchSource:0}: Error finding container c7b2a6d21313d9b4a73b3b7cd7791410e910eca0bf26b3ef58d146261be93a90: Status 404 returned error can't find the container with id c7b2a6d21313d9b4a73b3b7cd7791410e910eca0bf26b3ef58d146261be93a90 Feb 18 00:47:18 crc kubenswrapper[4847]: I0218 00:47:18.270676 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d","Type":"ContainerStarted","Data":"fda6b12f005508eb0112f2dacc57ae455df31692e964784c806829cf8f822ff5"} Feb 18 00:47:18 crc kubenswrapper[4847]: I0218 00:47:18.271085 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:47:18 crc kubenswrapper[4847]: I0218 00:47:18.273446 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8rvhw" event={"ID":"863d851c-3284-47db-8c80-d5d10f8c2b5c","Type":"ContainerStarted","Data":"48e8b9ca24092ea98f3e322c1436b83120d7db5ac593e560c78f6b4ebea9c86b"} Feb 18 00:47:18 crc kubenswrapper[4847]: I0218 00:47:18.275630 4847 generic.go:334] "Generic (PLEG): container finished" podID="e79eaa76-7d45-436b-a23f-157ce98678ba" containerID="90c199098c0b02ac71b902575c68f3c65279767844ac84093136fda9f2f10b27" exitCode=0 Feb 18 00:47:18 crc kubenswrapper[4847]: I0218 00:47:18.275676 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjfr7" event={"ID":"e79eaa76-7d45-436b-a23f-157ce98678ba","Type":"ContainerDied","Data":"90c199098c0b02ac71b902575c68f3c65279767844ac84093136fda9f2f10b27"} Feb 18 00:47:18 crc kubenswrapper[4847]: I0218 00:47:18.275693 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjfr7" event={"ID":"e79eaa76-7d45-436b-a23f-157ce98678ba","Type":"ContainerStarted","Data":"84977cb50977bb30c765c69dd6c5e10c9296c9404ef04c02ddc6f137000f0c2f"} Feb 18 00:47:18 crc kubenswrapper[4847]: I0218 00:47:18.277174 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1377-account-create-update-76bd5" Feb 18 00:47:18 crc kubenswrapper[4847]: I0218 00:47:18.277200 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a47c-account-create-update-fcsjz" Feb 18 00:47:18 crc kubenswrapper[4847]: I0218 00:47:18.277226 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-8qtg4" Feb 18 00:47:18 crc kubenswrapper[4847]: I0218 00:47:18.277252 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-e8b0-account-create-update-qnp69" event={"ID":"815709d3-9a7d-4e0e-a44e-a60ad1428919","Type":"ContainerStarted","Data":"bec2c0fe75fb488f5c4425186ceb2944d2c2f4b0024288d2a418007b0a5a5b16"} Feb 18 00:47:18 crc kubenswrapper[4847]: I0218 00:47:18.277329 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-e8b0-account-create-update-qnp69" event={"ID":"815709d3-9a7d-4e0e-a44e-a60ad1428919","Type":"ContainerStarted","Data":"c7b2a6d21313d9b4a73b3b7cd7791410e910eca0bf26b3ef58d146261be93a90"} Feb 18 00:47:18 crc kubenswrapper[4847]: I0218 00:47:18.323373 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.763031762 podStartE2EDuration="59.323355983s" podCreationTimestamp="2026-02-18 00:46:19 +0000 UTC" firstStartedPulling="2026-02-18 00:46:21.924972063 +0000 UTC m=+1255.302323005" lastFinishedPulling="2026-02-18 00:46:43.485296284 +0000 UTC m=+1276.862647226" observedRunningTime="2026-02-18 00:47:18.310174401 +0000 UTC m=+1311.687525343" watchObservedRunningTime="2026-02-18 00:47:18.323355983 +0000 UTC m=+1311.700706925" Feb 18 00:47:18 crc kubenswrapper[4847]: I0218 00:47:18.343092 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-e8b0-account-create-update-qnp69" podStartSLOduration=2.3430729599999998 podStartE2EDuration="2.34307296s" podCreationTimestamp="2026-02-18 00:47:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:47:18.327226065 +0000 UTC m=+1311.704577007" watchObservedRunningTime="2026-02-18 00:47:18.34307296 +0000 UTC m=+1311.720423902" Feb 18 00:47:18 crc kubenswrapper[4847]: I0218 
00:47:18.373545 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-8rvhw" podStartSLOduration=3.122892135 podStartE2EDuration="7.373522611s" podCreationTimestamp="2026-02-18 00:47:11 +0000 UTC" firstStartedPulling="2026-02-18 00:47:12.877307302 +0000 UTC m=+1306.254658244" lastFinishedPulling="2026-02-18 00:47:17.127937778 +0000 UTC m=+1310.505288720" observedRunningTime="2026-02-18 00:47:18.359162271 +0000 UTC m=+1311.736513213" watchObservedRunningTime="2026-02-18 00:47:18.373522611 +0000 UTC m=+1311.750873553" Feb 18 00:47:18 crc kubenswrapper[4847]: W0218 00:47:18.409540 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod811aea20_9057_472e_9a14_e2f04ad204cd.slice/crio-f6ea9560bc92a9793eb00830bb8713fd9c0bd9928d76778fc2718e2ed1894baa WatchSource:0}: Error finding container f6ea9560bc92a9793eb00830bb8713fd9c0bd9928d76778fc2718e2ed1894baa: Status 404 returned error can't find the container with id f6ea9560bc92a9793eb00830bb8713fd9c0bd9928d76778fc2718e2ed1894baa Feb 18 00:47:18 crc kubenswrapper[4847]: I0218 00:47:18.416299 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nsf78"] Feb 18 00:47:18 crc kubenswrapper[4847]: I0218 00:47:18.566348 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 18 00:47:19 crc kubenswrapper[4847]: I0218 00:47:19.286465 4847 generic.go:334] "Generic (PLEG): container finished" podID="815709d3-9a7d-4e0e-a44e-a60ad1428919" containerID="bec2c0fe75fb488f5c4425186ceb2944d2c2f4b0024288d2a418007b0a5a5b16" exitCode=0 Feb 18 00:47:19 crc kubenswrapper[4847]: I0218 00:47:19.286523 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-e8b0-account-create-update-qnp69" 
event={"ID":"815709d3-9a7d-4e0e-a44e-a60ad1428919","Type":"ContainerDied","Data":"bec2c0fe75fb488f5c4425186ceb2944d2c2f4b0024288d2a418007b0a5a5b16"} Feb 18 00:47:19 crc kubenswrapper[4847]: I0218 00:47:19.289148 4847 generic.go:334] "Generic (PLEG): container finished" podID="811aea20-9057-472e-9a14-e2f04ad204cd" containerID="c969932639a6afbb90efda97d2de65bcf1c1bf97985356a720ae9cc66837c67d" exitCode=0 Feb 18 00:47:19 crc kubenswrapper[4847]: I0218 00:47:19.289320 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nsf78" event={"ID":"811aea20-9057-472e-9a14-e2f04ad204cd","Type":"ContainerDied","Data":"c969932639a6afbb90efda97d2de65bcf1c1bf97985356a720ae9cc66837c67d"} Feb 18 00:47:19 crc kubenswrapper[4847]: I0218 00:47:19.289350 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nsf78" event={"ID":"811aea20-9057-472e-9a14-e2f04ad204cd","Type":"ContainerStarted","Data":"f6ea9560bc92a9793eb00830bb8713fd9c0bd9928d76778fc2718e2ed1894baa"} Feb 18 00:47:19 crc kubenswrapper[4847]: I0218 00:47:19.648849 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-qldtf"] Feb 18 00:47:19 crc kubenswrapper[4847]: I0218 00:47:19.650280 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qldtf" Feb 18 00:47:19 crc kubenswrapper[4847]: I0218 00:47:19.652808 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 18 00:47:19 crc kubenswrapper[4847]: I0218 00:47:19.653922 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6db67" Feb 18 00:47:19 crc kubenswrapper[4847]: I0218 00:47:19.706980 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-qldtf"] Feb 18 00:47:19 crc kubenswrapper[4847]: I0218 00:47:19.746622 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/60c4f757-8241-4268-92af-da05a6e0217e-db-sync-config-data\") pod \"glance-db-sync-qldtf\" (UID: \"60c4f757-8241-4268-92af-da05a6e0217e\") " pod="openstack/glance-db-sync-qldtf" Feb 18 00:47:19 crc kubenswrapper[4847]: I0218 00:47:19.746663 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60c4f757-8241-4268-92af-da05a6e0217e-combined-ca-bundle\") pod \"glance-db-sync-qldtf\" (UID: \"60c4f757-8241-4268-92af-da05a6e0217e\") " pod="openstack/glance-db-sync-qldtf" Feb 18 00:47:19 crc kubenswrapper[4847]: I0218 00:47:19.746695 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld52g\" (UniqueName: \"kubernetes.io/projected/60c4f757-8241-4268-92af-da05a6e0217e-kube-api-access-ld52g\") pod \"glance-db-sync-qldtf\" (UID: \"60c4f757-8241-4268-92af-da05a6e0217e\") " pod="openstack/glance-db-sync-qldtf" Feb 18 00:47:19 crc kubenswrapper[4847]: I0218 00:47:19.746802 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/60c4f757-8241-4268-92af-da05a6e0217e-config-data\") pod \"glance-db-sync-qldtf\" (UID: \"60c4f757-8241-4268-92af-da05a6e0217e\") " pod="openstack/glance-db-sync-qldtf" Feb 18 00:47:19 crc kubenswrapper[4847]: I0218 00:47:19.848377 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60c4f757-8241-4268-92af-da05a6e0217e-config-data\") pod \"glance-db-sync-qldtf\" (UID: \"60c4f757-8241-4268-92af-da05a6e0217e\") " pod="openstack/glance-db-sync-qldtf" Feb 18 00:47:19 crc kubenswrapper[4847]: I0218 00:47:19.848506 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/60c4f757-8241-4268-92af-da05a6e0217e-db-sync-config-data\") pod \"glance-db-sync-qldtf\" (UID: \"60c4f757-8241-4268-92af-da05a6e0217e\") " pod="openstack/glance-db-sync-qldtf" Feb 18 00:47:19 crc kubenswrapper[4847]: I0218 00:47:19.848526 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60c4f757-8241-4268-92af-da05a6e0217e-combined-ca-bundle\") pod \"glance-db-sync-qldtf\" (UID: \"60c4f757-8241-4268-92af-da05a6e0217e\") " pod="openstack/glance-db-sync-qldtf" Feb 18 00:47:19 crc kubenswrapper[4847]: I0218 00:47:19.848554 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld52g\" (UniqueName: \"kubernetes.io/projected/60c4f757-8241-4268-92af-da05a6e0217e-kube-api-access-ld52g\") pod \"glance-db-sync-qldtf\" (UID: \"60c4f757-8241-4268-92af-da05a6e0217e\") " pod="openstack/glance-db-sync-qldtf" Feb 18 00:47:19 crc kubenswrapper[4847]: I0218 00:47:19.859532 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60c4f757-8241-4268-92af-da05a6e0217e-config-data\") pod \"glance-db-sync-qldtf\" (UID: 
\"60c4f757-8241-4268-92af-da05a6e0217e\") " pod="openstack/glance-db-sync-qldtf" Feb 18 00:47:19 crc kubenswrapper[4847]: I0218 00:47:19.860080 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/60c4f757-8241-4268-92af-da05a6e0217e-db-sync-config-data\") pod \"glance-db-sync-qldtf\" (UID: \"60c4f757-8241-4268-92af-da05a6e0217e\") " pod="openstack/glance-db-sync-qldtf" Feb 18 00:47:19 crc kubenswrapper[4847]: I0218 00:47:19.860889 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60c4f757-8241-4268-92af-da05a6e0217e-combined-ca-bundle\") pod \"glance-db-sync-qldtf\" (UID: \"60c4f757-8241-4268-92af-da05a6e0217e\") " pod="openstack/glance-db-sync-qldtf" Feb 18 00:47:19 crc kubenswrapper[4847]: I0218 00:47:19.877387 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld52g\" (UniqueName: \"kubernetes.io/projected/60c4f757-8241-4268-92af-da05a6e0217e-kube-api-access-ld52g\") pod \"glance-db-sync-qldtf\" (UID: \"60c4f757-8241-4268-92af-da05a6e0217e\") " pod="openstack/glance-db-sync-qldtf" Feb 18 00:47:19 crc kubenswrapper[4847]: I0218 00:47:19.965046 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qldtf" Feb 18 00:47:21 crc kubenswrapper[4847]: I0218 00:47:21.907787 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" Feb 18 00:47:21 crc kubenswrapper[4847]: I0218 00:47:21.971288 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-k9ngj"] Feb 18 00:47:21 crc kubenswrapper[4847]: I0218 00:47:21.971543 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-k9ngj" podUID="3404c138-4060-43de-9cc5-d6017b245f2c" containerName="dnsmasq-dns" containerID="cri-o://54888030145adc68ec0c91dbb42e7189a0e53a67b037568a91cbb8747dbc0545" gracePeriod=10 Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.323412 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-e8b0-account-create-update-qnp69" event={"ID":"815709d3-9a7d-4e0e-a44e-a60ad1428919","Type":"ContainerDied","Data":"c7b2a6d21313d9b4a73b3b7cd7791410e910eca0bf26b3ef58d146261be93a90"} Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.323455 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7b2a6d21313d9b4a73b3b7cd7791410e910eca0bf26b3ef58d146261be93a90" Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.326732 4847 generic.go:334] "Generic (PLEG): container finished" podID="3404c138-4060-43de-9cc5-d6017b245f2c" containerID="54888030145adc68ec0c91dbb42e7189a0e53a67b037568a91cbb8747dbc0545" exitCode=0 Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.326808 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-k9ngj" event={"ID":"3404c138-4060-43de-9cc5-d6017b245f2c","Type":"ContainerDied","Data":"54888030145adc68ec0c91dbb42e7189a0e53a67b037568a91cbb8747dbc0545"} Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.334217 4847 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjfr7" event={"ID":"e79eaa76-7d45-436b-a23f-157ce98678ba","Type":"ContainerDied","Data":"84977cb50977bb30c765c69dd6c5e10c9296c9404ef04c02ddc6f137000f0c2f"} Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.334266 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84977cb50977bb30c765c69dd6c5e10c9296c9404ef04c02ddc6f137000f0c2f" Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.517879 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-e8b0-account-create-update-qnp69" Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.522489 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjfr7" Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.537668 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nsf78" Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.624404 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq4d2\" (UniqueName: \"kubernetes.io/projected/e79eaa76-7d45-436b-a23f-157ce98678ba-kube-api-access-hq4d2\") pod \"e79eaa76-7d45-436b-a23f-157ce98678ba\" (UID: \"e79eaa76-7d45-436b-a23f-157ce98678ba\") " Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.624510 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2bvd\" (UniqueName: \"kubernetes.io/projected/815709d3-9a7d-4e0e-a44e-a60ad1428919-kube-api-access-t2bvd\") pod \"815709d3-9a7d-4e0e-a44e-a60ad1428919\" (UID: \"815709d3-9a7d-4e0e-a44e-a60ad1428919\") " Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.624670 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e79eaa76-7d45-436b-a23f-157ce98678ba-operator-scripts\") pod \"e79eaa76-7d45-436b-a23f-157ce98678ba\" (UID: \"e79eaa76-7d45-436b-a23f-157ce98678ba\") " Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.624699 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/815709d3-9a7d-4e0e-a44e-a60ad1428919-operator-scripts\") pod \"815709d3-9a7d-4e0e-a44e-a60ad1428919\" (UID: \"815709d3-9a7d-4e0e-a44e-a60ad1428919\") " Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.625526 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/815709d3-9a7d-4e0e-a44e-a60ad1428919-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "815709d3-9a7d-4e0e-a44e-a60ad1428919" (UID: "815709d3-9a7d-4e0e-a44e-a60ad1428919"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.627082 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e79eaa76-7d45-436b-a23f-157ce98678ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e79eaa76-7d45-436b-a23f-157ce98678ba" (UID: "e79eaa76-7d45-436b-a23f-157ce98678ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.631327 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e79eaa76-7d45-436b-a23f-157ce98678ba-kube-api-access-hq4d2" (OuterVolumeSpecName: "kube-api-access-hq4d2") pod "e79eaa76-7d45-436b-a23f-157ce98678ba" (UID: "e79eaa76-7d45-436b-a23f-157ce98678ba"). InnerVolumeSpecName "kube-api-access-hq4d2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.633268 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/815709d3-9a7d-4e0e-a44e-a60ad1428919-kube-api-access-t2bvd" (OuterVolumeSpecName: "kube-api-access-t2bvd") pod "815709d3-9a7d-4e0e-a44e-a60ad1428919" (UID: "815709d3-9a7d-4e0e-a44e-a60ad1428919"). InnerVolumeSpecName "kube-api-access-t2bvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.726360 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/811aea20-9057-472e-9a14-e2f04ad204cd-operator-scripts\") pod \"811aea20-9057-472e-9a14-e2f04ad204cd\" (UID: \"811aea20-9057-472e-9a14-e2f04ad204cd\") " Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.726548 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc5g5\" (UniqueName: \"kubernetes.io/projected/811aea20-9057-472e-9a14-e2f04ad204cd-kube-api-access-mc5g5\") pod \"811aea20-9057-472e-9a14-e2f04ad204cd\" (UID: \"811aea20-9057-472e-9a14-e2f04ad204cd\") " Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.726942 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2bvd\" (UniqueName: \"kubernetes.io/projected/815709d3-9a7d-4e0e-a44e-a60ad1428919-kube-api-access-t2bvd\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.726959 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e79eaa76-7d45-436b-a23f-157ce98678ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.726969 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/815709d3-9a7d-4e0e-a44e-a60ad1428919-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.726978 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq4d2\" (UniqueName: \"kubernetes.io/projected/e79eaa76-7d45-436b-a23f-157ce98678ba-kube-api-access-hq4d2\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.727469 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/811aea20-9057-472e-9a14-e2f04ad204cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "811aea20-9057-472e-9a14-e2f04ad204cd" (UID: "811aea20-9057-472e-9a14-e2f04ad204cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.731835 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/811aea20-9057-472e-9a14-e2f04ad204cd-kube-api-access-mc5g5" (OuterVolumeSpecName: "kube-api-access-mc5g5") pod "811aea20-9057-472e-9a14-e2f04ad204cd" (UID: "811aea20-9057-472e-9a14-e2f04ad204cd"). InnerVolumeSpecName "kube-api-access-mc5g5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.828274 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/811aea20-9057-472e-9a14-e2f04ad204cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.828480 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc5g5\" (UniqueName: \"kubernetes.io/projected/811aea20-9057-472e-9a14-e2f04ad204cd-kube-api-access-mc5g5\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:22 crc kubenswrapper[4847]: I0218 00:47:22.870180 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-k9ngj" Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.031947 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-ovsdbserver-sb\") pod \"3404c138-4060-43de-9cc5-d6017b245f2c\" (UID: \"3404c138-4060-43de-9cc5-d6017b245f2c\") " Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.032624 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-config\") pod \"3404c138-4060-43de-9cc5-d6017b245f2c\" (UID: \"3404c138-4060-43de-9cc5-d6017b245f2c\") " Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.032662 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-dns-svc\") pod \"3404c138-4060-43de-9cc5-d6017b245f2c\" (UID: \"3404c138-4060-43de-9cc5-d6017b245f2c\") " Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.032705 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlm6r\" (UniqueName: \"kubernetes.io/projected/3404c138-4060-43de-9cc5-d6017b245f2c-kube-api-access-wlm6r\") pod \"3404c138-4060-43de-9cc5-d6017b245f2c\" (UID: \"3404c138-4060-43de-9cc5-d6017b245f2c\") " Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.032793 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-ovsdbserver-nb\") pod \"3404c138-4060-43de-9cc5-d6017b245f2c\" (UID: \"3404c138-4060-43de-9cc5-d6017b245f2c\") " Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.038207 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/3404c138-4060-43de-9cc5-d6017b245f2c-kube-api-access-wlm6r" (OuterVolumeSpecName: "kube-api-access-wlm6r") pod "3404c138-4060-43de-9cc5-d6017b245f2c" (UID: "3404c138-4060-43de-9cc5-d6017b245f2c"). InnerVolumeSpecName "kube-api-access-wlm6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.052331 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-qldtf"] Feb 18 00:47:23 crc kubenswrapper[4847]: W0218 00:47:23.059868 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60c4f757_8241_4268_92af_da05a6e0217e.slice/crio-fdd07e0990e8c8ce497bff9821e280e289303b1258f57a9a9d755749956aff23 WatchSource:0}: Error finding container fdd07e0990e8c8ce497bff9821e280e289303b1258f57a9a9d755749956aff23: Status 404 returned error can't find the container with id fdd07e0990e8c8ce497bff9821e280e289303b1258f57a9a9d755749956aff23 Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.085032 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3404c138-4060-43de-9cc5-d6017b245f2c" (UID: "3404c138-4060-43de-9cc5-d6017b245f2c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.091290 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3404c138-4060-43de-9cc5-d6017b245f2c" (UID: "3404c138-4060-43de-9cc5-d6017b245f2c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.096583 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3404c138-4060-43de-9cc5-d6017b245f2c" (UID: "3404c138-4060-43de-9cc5-d6017b245f2c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.104225 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-config" (OuterVolumeSpecName: "config") pod "3404c138-4060-43de-9cc5-d6017b245f2c" (UID: "3404c138-4060-43de-9cc5-d6017b245f2c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.135121 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.135167 4847 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.135177 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlm6r\" (UniqueName: \"kubernetes.io/projected/3404c138-4060-43de-9cc5-d6017b245f2c-kube-api-access-wlm6r\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.135188 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:23 crc 
kubenswrapper[4847]: I0218 00:47:23.135196 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3404c138-4060-43de-9cc5-d6017b245f2c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.347580 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-k9ngj" event={"ID":"3404c138-4060-43de-9cc5-d6017b245f2c","Type":"ContainerDied","Data":"ca1b50dd2c0fc04b11e7b6b959e90324b2f3b3ddea69388663b22990db92fd51"} Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.347634 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-k9ngj" Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.347646 4847 scope.go:117] "RemoveContainer" containerID="54888030145adc68ec0c91dbb42e7189a0e53a67b037568a91cbb8747dbc0545" Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.348850 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qldtf" event={"ID":"60c4f757-8241-4268-92af-da05a6e0217e","Type":"ContainerStarted","Data":"fdd07e0990e8c8ce497bff9821e280e289303b1258f57a9a9d755749956aff23"} Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.351271 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nsf78" event={"ID":"811aea20-9057-472e-9a14-e2f04ad204cd","Type":"ContainerDied","Data":"f6ea9560bc92a9793eb00830bb8713fd9c0bd9928d76778fc2718e2ed1894baa"} Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.351334 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6ea9560bc92a9793eb00830bb8713fd9c0bd9928d76778fc2718e2ed1894baa" Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.351383 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-nsf78" Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.361052 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1989970b-d11c-44b8-b0b7-011c8e842c1f","Type":"ContainerStarted","Data":"1ac9c5e88215432ae3f672dcbe3ce135c29b1ad1ba1fdd3d7185b08c23ac44be"} Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.361083 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-e8b0-account-create-update-qnp69" Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.361116 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-wjfr7" Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.381336 4847 scope.go:117] "RemoveContainer" containerID="09c81b89f7191b6fee222a821018edf1835400878ae32eca27fb0d6111a8218e" Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.407012 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=19.414134125 podStartE2EDuration="57.406992232s" podCreationTimestamp="2026-02-18 00:46:26 +0000 UTC" firstStartedPulling="2026-02-18 00:46:44.379316213 +0000 UTC m=+1277.756667155" lastFinishedPulling="2026-02-18 00:47:22.37217432 +0000 UTC m=+1315.749525262" observedRunningTime="2026-02-18 00:47:23.40058919 +0000 UTC m=+1316.777940132" watchObservedRunningTime="2026-02-18 00:47:23.406992232 +0000 UTC m=+1316.784343174" Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.457902 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-k9ngj"] Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.472516 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-k9ngj"] Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 
00:47:23.491438 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.491505 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.491551 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.492391 4847 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"270eacc836d3834cb6726d9cae5de99162027296d57351176eedc46878735764"} pod="openshift-machine-config-operator/machine-config-daemon-xsj47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.492552 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" containerID="cri-o://270eacc836d3834cb6726d9cae5de99162027296d57351176eedc46878735764" gracePeriod=600 Feb 18 00:47:23 crc kubenswrapper[4847]: I0218 00:47:23.754508 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-etc-swift\") pod 
\"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0" Feb 18 00:47:23 crc kubenswrapper[4847]: E0218 00:47:23.754727 4847 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 18 00:47:23 crc kubenswrapper[4847]: E0218 00:47:23.754746 4847 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 18 00:47:23 crc kubenswrapper[4847]: E0218 00:47:23.754809 4847 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-etc-swift podName:623045fa-a3f1-4ad5-a5f7-361f31303bfb nodeName:}" failed. No retries permitted until 2026-02-18 00:47:39.754790587 +0000 UTC m=+1333.132141529 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-etc-swift") pod "swift-storage-0" (UID: "623045fa-a3f1-4ad5-a5f7-361f31303bfb") : configmap "swift-ring-files" not found Feb 18 00:47:24 crc kubenswrapper[4847]: I0218 00:47:24.373317 4847 generic.go:334] "Generic (PLEG): container finished" podID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerID="270eacc836d3834cb6726d9cae5de99162027296d57351176eedc46878735764" exitCode=0 Feb 18 00:47:24 crc kubenswrapper[4847]: I0218 00:47:24.373661 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerDied","Data":"270eacc836d3834cb6726d9cae5de99162027296d57351176eedc46878735764"} Feb 18 00:47:24 crc kubenswrapper[4847]: I0218 00:47:24.373787 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" 
event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerStarted","Data":"c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d"} Feb 18 00:47:24 crc kubenswrapper[4847]: I0218 00:47:24.373819 4847 scope.go:117] "RemoveContainer" containerID="0fd06824414c18aeb73533601d48a5d63e6df2929401b5f19f7490f5ebb56186" Feb 18 00:47:24 crc kubenswrapper[4847]: I0218 00:47:24.516659 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xh6ft" podUID="2801a17e-6108-4ffe-9eac-7068b93707e1" containerName="ovn-controller" probeResult="failure" output=< Feb 18 00:47:24 crc kubenswrapper[4847]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 18 00:47:24 crc kubenswrapper[4847]: > Feb 18 00:47:24 crc kubenswrapper[4847]: I0218 00:47:24.548091 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:47:25 crc kubenswrapper[4847]: I0218 00:47:25.392862 4847 generic.go:334] "Generic (PLEG): container finished" podID="863d851c-3284-47db-8c80-d5d10f8c2b5c" containerID="48e8b9ca24092ea98f3e322c1436b83120d7db5ac593e560c78f6b4ebea9c86b" exitCode=0 Feb 18 00:47:25 crc kubenswrapper[4847]: I0218 00:47:25.392914 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8rvhw" event={"ID":"863d851c-3284-47db-8c80-d5d10f8c2b5c","Type":"ContainerDied","Data":"48e8b9ca24092ea98f3e322c1436b83120d7db5ac593e560c78f6b4ebea9c86b"} Feb 18 00:47:25 crc kubenswrapper[4847]: I0218 00:47:25.419521 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3404c138-4060-43de-9cc5-d6017b245f2c" path="/var/lib/kubelet/pods/3404c138-4060-43de-9cc5-d6017b245f2c/volumes" Feb 18 00:47:26 crc kubenswrapper[4847]: I0218 00:47:26.002807 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-nsf78"] Feb 18 00:47:26 crc kubenswrapper[4847]: I0218 00:47:26.017059 4847 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-nsf78"] Feb 18 00:47:26 crc kubenswrapper[4847]: I0218 00:47:26.842879 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-8rvhw" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.016017 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/863d851c-3284-47db-8c80-d5d10f8c2b5c-ring-data-devices\") pod \"863d851c-3284-47db-8c80-d5d10f8c2b5c\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.016075 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/863d851c-3284-47db-8c80-d5d10f8c2b5c-dispersionconf\") pod \"863d851c-3284-47db-8c80-d5d10f8c2b5c\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.016116 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/863d851c-3284-47db-8c80-d5d10f8c2b5c-swiftconf\") pod \"863d851c-3284-47db-8c80-d5d10f8c2b5c\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.016153 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/863d851c-3284-47db-8c80-d5d10f8c2b5c-scripts\") pod \"863d851c-3284-47db-8c80-d5d10f8c2b5c\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.016213 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863d851c-3284-47db-8c80-d5d10f8c2b5c-combined-ca-bundle\") pod \"863d851c-3284-47db-8c80-d5d10f8c2b5c\" (UID: 
\"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.016276 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/863d851c-3284-47db-8c80-d5d10f8c2b5c-etc-swift\") pod \"863d851c-3284-47db-8c80-d5d10f8c2b5c\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.016363 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b658n\" (UniqueName: \"kubernetes.io/projected/863d851c-3284-47db-8c80-d5d10f8c2b5c-kube-api-access-b658n\") pod \"863d851c-3284-47db-8c80-d5d10f8c2b5c\" (UID: \"863d851c-3284-47db-8c80-d5d10f8c2b5c\") " Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.016853 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/863d851c-3284-47db-8c80-d5d10f8c2b5c-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "863d851c-3284-47db-8c80-d5d10f8c2b5c" (UID: "863d851c-3284-47db-8c80-d5d10f8c2b5c"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.020812 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/863d851c-3284-47db-8c80-d5d10f8c2b5c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "863d851c-3284-47db-8c80-d5d10f8c2b5c" (UID: "863d851c-3284-47db-8c80-d5d10f8c2b5c"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.023769 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/863d851c-3284-47db-8c80-d5d10f8c2b5c-kube-api-access-b658n" (OuterVolumeSpecName: "kube-api-access-b658n") pod "863d851c-3284-47db-8c80-d5d10f8c2b5c" (UID: "863d851c-3284-47db-8c80-d5d10f8c2b5c"). 
InnerVolumeSpecName "kube-api-access-b658n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.027001 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863d851c-3284-47db-8c80-d5d10f8c2b5c-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "863d851c-3284-47db-8c80-d5d10f8c2b5c" (UID: "863d851c-3284-47db-8c80-d5d10f8c2b5c"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.043790 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/863d851c-3284-47db-8c80-d5d10f8c2b5c-scripts" (OuterVolumeSpecName: "scripts") pod "863d851c-3284-47db-8c80-d5d10f8c2b5c" (UID: "863d851c-3284-47db-8c80-d5d10f8c2b5c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.047029 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863d851c-3284-47db-8c80-d5d10f8c2b5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "863d851c-3284-47db-8c80-d5d10f8c2b5c" (UID: "863d851c-3284-47db-8c80-d5d10f8c2b5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.062300 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/863d851c-3284-47db-8c80-d5d10f8c2b5c-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "863d851c-3284-47db-8c80-d5d10f8c2b5c" (UID: "863d851c-3284-47db-8c80-d5d10f8c2b5c"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.118965 4847 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/863d851c-3284-47db-8c80-d5d10f8c2b5c-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.119265 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b658n\" (UniqueName: \"kubernetes.io/projected/863d851c-3284-47db-8c80-d5d10f8c2b5c-kube-api-access-b658n\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.119276 4847 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/863d851c-3284-47db-8c80-d5d10f8c2b5c-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.119285 4847 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/863d851c-3284-47db-8c80-d5d10f8c2b5c-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.119296 4847 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/863d851c-3284-47db-8c80-d5d10f8c2b5c-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.119305 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/863d851c-3284-47db-8c80-d5d10f8c2b5c-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.119313 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/863d851c-3284-47db-8c80-d5d10f8c2b5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.170308 4847 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 00:47:27 crc kubenswrapper[4847]: E0218 00:47:27.170679 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79eaa76-7d45-436b-a23f-157ce98678ba" containerName="mariadb-database-create" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.170692 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79eaa76-7d45-436b-a23f-157ce98678ba" containerName="mariadb-database-create" Feb 18 00:47:27 crc kubenswrapper[4847]: E0218 00:47:27.170709 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863d851c-3284-47db-8c80-d5d10f8c2b5c" containerName="swift-ring-rebalance" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.170715 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="863d851c-3284-47db-8c80-d5d10f8c2b5c" containerName="swift-ring-rebalance" Feb 18 00:47:27 crc kubenswrapper[4847]: E0218 00:47:27.170734 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3404c138-4060-43de-9cc5-d6017b245f2c" containerName="init" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.170739 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="3404c138-4060-43de-9cc5-d6017b245f2c" containerName="init" Feb 18 00:47:27 crc kubenswrapper[4847]: E0218 00:47:27.170752 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="811aea20-9057-472e-9a14-e2f04ad204cd" containerName="mariadb-account-create-update" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.170758 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="811aea20-9057-472e-9a14-e2f04ad204cd" containerName="mariadb-account-create-update" Feb 18 00:47:27 crc kubenswrapper[4847]: E0218 00:47:27.170768 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815709d3-9a7d-4e0e-a44e-a60ad1428919" containerName="mariadb-account-create-update" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.170774 4847 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="815709d3-9a7d-4e0e-a44e-a60ad1428919" containerName="mariadb-account-create-update" Feb 18 00:47:27 crc kubenswrapper[4847]: E0218 00:47:27.170791 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3404c138-4060-43de-9cc5-d6017b245f2c" containerName="dnsmasq-dns" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.170796 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="3404c138-4060-43de-9cc5-d6017b245f2c" containerName="dnsmasq-dns" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.170951 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="863d851c-3284-47db-8c80-d5d10f8c2b5c" containerName="swift-ring-rebalance" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.170967 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="815709d3-9a7d-4e0e-a44e-a60ad1428919" containerName="mariadb-account-create-update" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.170978 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="811aea20-9057-472e-9a14-e2f04ad204cd" containerName="mariadb-account-create-update" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.170990 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="e79eaa76-7d45-436b-a23f-157ce98678ba" containerName="mariadb-database-create" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.171004 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="3404c138-4060-43de-9cc5-d6017b245f2c" containerName="dnsmasq-dns" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.171617 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.175823 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.212809 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.327109 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbp9l\" (UniqueName: \"kubernetes.io/projected/81f5a395-7a57-4aac-9c38-35207716eb18-kube-api-access-gbp9l\") pod \"mysqld-exporter-0\" (UID: \"81f5a395-7a57-4aac-9c38-35207716eb18\") " pod="openstack/mysqld-exporter-0" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.327216 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81f5a395-7a57-4aac-9c38-35207716eb18-config-data\") pod \"mysqld-exporter-0\" (UID: \"81f5a395-7a57-4aac-9c38-35207716eb18\") " pod="openstack/mysqld-exporter-0" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.327483 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81f5a395-7a57-4aac-9c38-35207716eb18-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"81f5a395-7a57-4aac-9c38-35207716eb18\") " pod="openstack/mysqld-exporter-0" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.419587 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="811aea20-9057-472e-9a14-e2f04ad204cd" path="/var/lib/kubelet/pods/811aea20-9057-472e-9a14-e2f04ad204cd/volumes" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.419926 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-8rvhw" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.420289 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-8rvhw" event={"ID":"863d851c-3284-47db-8c80-d5d10f8c2b5c","Type":"ContainerDied","Data":"a6e3021124f08a66a0722c4143375144fdf1d0c754c8ffe692a675fe1daf758a"} Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.420317 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6e3021124f08a66a0722c4143375144fdf1d0c754c8ffe692a675fe1daf758a" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.428956 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbp9l\" (UniqueName: \"kubernetes.io/projected/81f5a395-7a57-4aac-9c38-35207716eb18-kube-api-access-gbp9l\") pod \"mysqld-exporter-0\" (UID: \"81f5a395-7a57-4aac-9c38-35207716eb18\") " pod="openstack/mysqld-exporter-0" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.429142 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81f5a395-7a57-4aac-9c38-35207716eb18-config-data\") pod \"mysqld-exporter-0\" (UID: \"81f5a395-7a57-4aac-9c38-35207716eb18\") " pod="openstack/mysqld-exporter-0" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.429233 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81f5a395-7a57-4aac-9c38-35207716eb18-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"81f5a395-7a57-4aac-9c38-35207716eb18\") " pod="openstack/mysqld-exporter-0" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.437217 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.446581 4847 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81f5a395-7a57-4aac-9c38-35207716eb18-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"81f5a395-7a57-4aac-9c38-35207716eb18\") " pod="openstack/mysqld-exporter-0" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.447944 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81f5a395-7a57-4aac-9c38-35207716eb18-config-data\") pod \"mysqld-exporter-0\" (UID: \"81f5a395-7a57-4aac-9c38-35207716eb18\") " pod="openstack/mysqld-exporter-0" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.452168 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbp9l\" (UniqueName: \"kubernetes.io/projected/81f5a395-7a57-4aac-9c38-35207716eb18-kube-api-access-gbp9l\") pod \"mysqld-exporter-0\" (UID: \"81f5a395-7a57-4aac-9c38-35207716eb18\") " pod="openstack/mysqld-exporter-0" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.524966 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.599510 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-8hbdg"] Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.601169 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8hbdg" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.613856 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.614833 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8hbdg"] Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.632854 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.633106 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.638754 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.735737 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s9gx\" (UniqueName: \"kubernetes.io/projected/12c1f1eb-8e51-4f05-b931-070a5a0612af-kube-api-access-7s9gx\") pod \"root-account-create-update-8hbdg\" (UID: \"12c1f1eb-8e51-4f05-b931-070a5a0612af\") " pod="openstack/root-account-create-update-8hbdg" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.735852 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12c1f1eb-8e51-4f05-b931-070a5a0612af-operator-scripts\") pod \"root-account-create-update-8hbdg\" (UID: \"12c1f1eb-8e51-4f05-b931-070a5a0612af\") " pod="openstack/root-account-create-update-8hbdg" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.839824 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s9gx\" (UniqueName: 
\"kubernetes.io/projected/12c1f1eb-8e51-4f05-b931-070a5a0612af-kube-api-access-7s9gx\") pod \"root-account-create-update-8hbdg\" (UID: \"12c1f1eb-8e51-4f05-b931-070a5a0612af\") " pod="openstack/root-account-create-update-8hbdg" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.839920 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12c1f1eb-8e51-4f05-b931-070a5a0612af-operator-scripts\") pod \"root-account-create-update-8hbdg\" (UID: \"12c1f1eb-8e51-4f05-b931-070a5a0612af\") " pod="openstack/root-account-create-update-8hbdg" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.842356 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12c1f1eb-8e51-4f05-b931-070a5a0612af-operator-scripts\") pod \"root-account-create-update-8hbdg\" (UID: \"12c1f1eb-8e51-4f05-b931-070a5a0612af\") " pod="openstack/root-account-create-update-8hbdg" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.859857 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s9gx\" (UniqueName: \"kubernetes.io/projected/12c1f1eb-8e51-4f05-b931-070a5a0612af-kube-api-access-7s9gx\") pod \"root-account-create-update-8hbdg\" (UID: \"12c1f1eb-8e51-4f05-b931-070a5a0612af\") " pod="openstack/root-account-create-update-8hbdg" Feb 18 00:47:27 crc kubenswrapper[4847]: I0218 00:47:27.926139 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8hbdg" Feb 18 00:47:28 crc kubenswrapper[4847]: I0218 00:47:28.048591 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 00:47:28 crc kubenswrapper[4847]: I0218 00:47:28.430472 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"81f5a395-7a57-4aac-9c38-35207716eb18","Type":"ContainerStarted","Data":"65e22442076f115d1b32dc44b1d82890665225931824a302cc28372ac880a000"} Feb 18 00:47:28 crc kubenswrapper[4847]: I0218 00:47:28.432434 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:28 crc kubenswrapper[4847]: I0218 00:47:28.498059 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8hbdg"] Feb 18 00:47:29 crc kubenswrapper[4847]: I0218 00:47:29.439945 4847 generic.go:334] "Generic (PLEG): container finished" podID="1977a705-30e5-456c-8e2c-2cd05e0325e3" containerID="d96bb8e16fe87474f7d51baf5d2ee2d7beb30a197c7d10da3871934e6475e918" exitCode=0 Feb 18 00:47:29 crc kubenswrapper[4847]: I0218 00:47:29.440042 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1977a705-30e5-456c-8e2c-2cd05e0325e3","Type":"ContainerDied","Data":"d96bb8e16fe87474f7d51baf5d2ee2d7beb30a197c7d10da3871934e6475e918"} Feb 18 00:47:29 crc kubenswrapper[4847]: I0218 00:47:29.445357 4847 generic.go:334] "Generic (PLEG): container finished" podID="12c1f1eb-8e51-4f05-b931-070a5a0612af" containerID="6be4f2fc66f2c95cd71292cb373c9db0c755a4cbcb5ca698963e6c82a8ceb663" exitCode=0 Feb 18 00:47:29 crc kubenswrapper[4847]: I0218 00:47:29.445440 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8hbdg" 
event={"ID":"12c1f1eb-8e51-4f05-b931-070a5a0612af","Type":"ContainerDied","Data":"6be4f2fc66f2c95cd71292cb373c9db0c755a4cbcb5ca698963e6c82a8ceb663"} Feb 18 00:47:29 crc kubenswrapper[4847]: I0218 00:47:29.445469 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8hbdg" event={"ID":"12c1f1eb-8e51-4f05-b931-070a5a0612af","Type":"ContainerStarted","Data":"dda2ad1e3c283bec779a35236decb920e40ade0111fe41551702d9af13ea6fce"} Feb 18 00:47:29 crc kubenswrapper[4847]: I0218 00:47:29.520260 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xh6ft" podUID="2801a17e-6108-4ffe-9eac-7068b93707e1" containerName="ovn-controller" probeResult="failure" output=< Feb 18 00:47:29 crc kubenswrapper[4847]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 18 00:47:29 crc kubenswrapper[4847]: > Feb 18 00:47:29 crc kubenswrapper[4847]: I0218 00:47:29.550880 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-h5k8p" Feb 18 00:47:29 crc kubenswrapper[4847]: I0218 00:47:29.748933 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xh6ft-config-thv2s"] Feb 18 00:47:29 crc kubenswrapper[4847]: I0218 00:47:29.753509 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xh6ft-config-thv2s" Feb 18 00:47:29 crc kubenswrapper[4847]: I0218 00:47:29.756256 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 18 00:47:29 crc kubenswrapper[4847]: I0218 00:47:29.759050 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xh6ft-config-thv2s"] Feb 18 00:47:29 crc kubenswrapper[4847]: I0218 00:47:29.901492 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/612ccf37-b028-408b-9775-aca576dea633-var-run-ovn\") pod \"ovn-controller-xh6ft-config-thv2s\" (UID: \"612ccf37-b028-408b-9775-aca576dea633\") " pod="openstack/ovn-controller-xh6ft-config-thv2s" Feb 18 00:47:29 crc kubenswrapper[4847]: I0218 00:47:29.901790 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/612ccf37-b028-408b-9775-aca576dea633-var-log-ovn\") pod \"ovn-controller-xh6ft-config-thv2s\" (UID: \"612ccf37-b028-408b-9775-aca576dea633\") " pod="openstack/ovn-controller-xh6ft-config-thv2s" Feb 18 00:47:29 crc kubenswrapper[4847]: I0218 00:47:29.901877 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/612ccf37-b028-408b-9775-aca576dea633-var-run\") pod \"ovn-controller-xh6ft-config-thv2s\" (UID: \"612ccf37-b028-408b-9775-aca576dea633\") " pod="openstack/ovn-controller-xh6ft-config-thv2s" Feb 18 00:47:29 crc kubenswrapper[4847]: I0218 00:47:29.901934 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2742b\" (UniqueName: \"kubernetes.io/projected/612ccf37-b028-408b-9775-aca576dea633-kube-api-access-2742b\") pod \"ovn-controller-xh6ft-config-thv2s\" (UID: 
\"612ccf37-b028-408b-9775-aca576dea633\") " pod="openstack/ovn-controller-xh6ft-config-thv2s" Feb 18 00:47:29 crc kubenswrapper[4847]: I0218 00:47:29.901985 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/612ccf37-b028-408b-9775-aca576dea633-scripts\") pod \"ovn-controller-xh6ft-config-thv2s\" (UID: \"612ccf37-b028-408b-9775-aca576dea633\") " pod="openstack/ovn-controller-xh6ft-config-thv2s" Feb 18 00:47:29 crc kubenswrapper[4847]: I0218 00:47:29.902030 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/612ccf37-b028-408b-9775-aca576dea633-additional-scripts\") pod \"ovn-controller-xh6ft-config-thv2s\" (UID: \"612ccf37-b028-408b-9775-aca576dea633\") " pod="openstack/ovn-controller-xh6ft-config-thv2s" Feb 18 00:47:30 crc kubenswrapper[4847]: I0218 00:47:30.003460 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/612ccf37-b028-408b-9775-aca576dea633-var-log-ovn\") pod \"ovn-controller-xh6ft-config-thv2s\" (UID: \"612ccf37-b028-408b-9775-aca576dea633\") " pod="openstack/ovn-controller-xh6ft-config-thv2s" Feb 18 00:47:30 crc kubenswrapper[4847]: I0218 00:47:30.003789 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/612ccf37-b028-408b-9775-aca576dea633-var-log-ovn\") pod \"ovn-controller-xh6ft-config-thv2s\" (UID: \"612ccf37-b028-408b-9775-aca576dea633\") " pod="openstack/ovn-controller-xh6ft-config-thv2s" Feb 18 00:47:30 crc kubenswrapper[4847]: I0218 00:47:30.003804 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/612ccf37-b028-408b-9775-aca576dea633-var-run\") pod \"ovn-controller-xh6ft-config-thv2s\" (UID: 
\"612ccf37-b028-408b-9775-aca576dea633\") " pod="openstack/ovn-controller-xh6ft-config-thv2s" Feb 18 00:47:30 crc kubenswrapper[4847]: I0218 00:47:30.003870 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/612ccf37-b028-408b-9775-aca576dea633-var-run\") pod \"ovn-controller-xh6ft-config-thv2s\" (UID: \"612ccf37-b028-408b-9775-aca576dea633\") " pod="openstack/ovn-controller-xh6ft-config-thv2s" Feb 18 00:47:30 crc kubenswrapper[4847]: I0218 00:47:30.003937 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2742b\" (UniqueName: \"kubernetes.io/projected/612ccf37-b028-408b-9775-aca576dea633-kube-api-access-2742b\") pod \"ovn-controller-xh6ft-config-thv2s\" (UID: \"612ccf37-b028-408b-9775-aca576dea633\") " pod="openstack/ovn-controller-xh6ft-config-thv2s" Feb 18 00:47:30 crc kubenswrapper[4847]: I0218 00:47:30.004032 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/612ccf37-b028-408b-9775-aca576dea633-scripts\") pod \"ovn-controller-xh6ft-config-thv2s\" (UID: \"612ccf37-b028-408b-9775-aca576dea633\") " pod="openstack/ovn-controller-xh6ft-config-thv2s" Feb 18 00:47:30 crc kubenswrapper[4847]: I0218 00:47:30.004104 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/612ccf37-b028-408b-9775-aca576dea633-additional-scripts\") pod \"ovn-controller-xh6ft-config-thv2s\" (UID: \"612ccf37-b028-408b-9775-aca576dea633\") " pod="openstack/ovn-controller-xh6ft-config-thv2s" Feb 18 00:47:30 crc kubenswrapper[4847]: I0218 00:47:30.004361 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/612ccf37-b028-408b-9775-aca576dea633-var-run-ovn\") pod \"ovn-controller-xh6ft-config-thv2s\" (UID: 
\"612ccf37-b028-408b-9775-aca576dea633\") " pod="openstack/ovn-controller-xh6ft-config-thv2s" Feb 18 00:47:30 crc kubenswrapper[4847]: I0218 00:47:30.004558 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/612ccf37-b028-408b-9775-aca576dea633-var-run-ovn\") pod \"ovn-controller-xh6ft-config-thv2s\" (UID: \"612ccf37-b028-408b-9775-aca576dea633\") " pod="openstack/ovn-controller-xh6ft-config-thv2s" Feb 18 00:47:30 crc kubenswrapper[4847]: I0218 00:47:30.005293 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/612ccf37-b028-408b-9775-aca576dea633-additional-scripts\") pod \"ovn-controller-xh6ft-config-thv2s\" (UID: \"612ccf37-b028-408b-9775-aca576dea633\") " pod="openstack/ovn-controller-xh6ft-config-thv2s" Feb 18 00:47:30 crc kubenswrapper[4847]: I0218 00:47:30.006288 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/612ccf37-b028-408b-9775-aca576dea633-scripts\") pod \"ovn-controller-xh6ft-config-thv2s\" (UID: \"612ccf37-b028-408b-9775-aca576dea633\") " pod="openstack/ovn-controller-xh6ft-config-thv2s" Feb 18 00:47:30 crc kubenswrapper[4847]: I0218 00:47:30.035993 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2742b\" (UniqueName: \"kubernetes.io/projected/612ccf37-b028-408b-9775-aca576dea633-kube-api-access-2742b\") pod \"ovn-controller-xh6ft-config-thv2s\" (UID: \"612ccf37-b028-408b-9775-aca576dea633\") " pod="openstack/ovn-controller-xh6ft-config-thv2s" Feb 18 00:47:30 crc kubenswrapper[4847]: I0218 00:47:30.086397 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xh6ft-config-thv2s" Feb 18 00:47:30 crc kubenswrapper[4847]: I0218 00:47:30.455813 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"81f5a395-7a57-4aac-9c38-35207716eb18","Type":"ContainerStarted","Data":"a424ae1212c82519dc92eda5fe818e5dd7409135ce09d1fa203fd98a9bde5015"} Feb 18 00:47:30 crc kubenswrapper[4847]: I0218 00:47:30.459528 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1977a705-30e5-456c-8e2c-2cd05e0325e3","Type":"ContainerStarted","Data":"ad13e112489f27c6e7a5d7aa7d1ca78cb7cf9f788e81a258f04289d28ce72ece"} Feb 18 00:47:30 crc kubenswrapper[4847]: I0218 00:47:30.460587 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 00:47:30 crc kubenswrapper[4847]: I0218 00:47:30.474896 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=1.510245826 podStartE2EDuration="3.474872734s" podCreationTimestamp="2026-02-18 00:47:27 +0000 UTC" firstStartedPulling="2026-02-18 00:47:28.085723544 +0000 UTC m=+1321.463074486" lastFinishedPulling="2026-02-18 00:47:30.050350452 +0000 UTC m=+1323.427701394" observedRunningTime="2026-02-18 00:47:30.471089414 +0000 UTC m=+1323.848440366" watchObservedRunningTime="2026-02-18 00:47:30.474872734 +0000 UTC m=+1323.852223686" Feb 18 00:47:30 crc kubenswrapper[4847]: I0218 00:47:30.524773 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371965.330019 podStartE2EDuration="1m11.524756675s" podCreationTimestamp="2026-02-18 00:46:19 +0000 UTC" firstStartedPulling="2026-02-18 00:46:21.346895016 +0000 UTC m=+1254.724245958" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:47:30.522637545 +0000 UTC m=+1323.899988487" 
watchObservedRunningTime="2026-02-18 00:47:30.524756675 +0000 UTC m=+1323.902107617" Feb 18 00:47:30 crc kubenswrapper[4847]: I0218 00:47:30.577872 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xh6ft-config-thv2s"] Feb 18 00:47:30 crc kubenswrapper[4847]: I0218 00:47:30.915022 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8hbdg" Feb 18 00:47:31 crc kubenswrapper[4847]: I0218 00:47:31.030387 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12c1f1eb-8e51-4f05-b931-070a5a0612af-operator-scripts\") pod \"12c1f1eb-8e51-4f05-b931-070a5a0612af\" (UID: \"12c1f1eb-8e51-4f05-b931-070a5a0612af\") " Feb 18 00:47:31 crc kubenswrapper[4847]: I0218 00:47:31.030531 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s9gx\" (UniqueName: \"kubernetes.io/projected/12c1f1eb-8e51-4f05-b931-070a5a0612af-kube-api-access-7s9gx\") pod \"12c1f1eb-8e51-4f05-b931-070a5a0612af\" (UID: \"12c1f1eb-8e51-4f05-b931-070a5a0612af\") " Feb 18 00:47:31 crc kubenswrapper[4847]: I0218 00:47:31.032527 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12c1f1eb-8e51-4f05-b931-070a5a0612af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12c1f1eb-8e51-4f05-b931-070a5a0612af" (UID: "12c1f1eb-8e51-4f05-b931-070a5a0612af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:31 crc kubenswrapper[4847]: I0218 00:47:31.036251 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12c1f1eb-8e51-4f05-b931-070a5a0612af-kube-api-access-7s9gx" (OuterVolumeSpecName: "kube-api-access-7s9gx") pod "12c1f1eb-8e51-4f05-b931-070a5a0612af" (UID: "12c1f1eb-8e51-4f05-b931-070a5a0612af"). 
InnerVolumeSpecName "kube-api-access-7s9gx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:31 crc kubenswrapper[4847]: I0218 00:47:31.078530 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 00:47:31 crc kubenswrapper[4847]: I0218 00:47:31.078788 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1989970b-d11c-44b8-b0b7-011c8e842c1f" containerName="prometheus" containerID="cri-o://5dd8945c9fed1a5fef0dcfc0c944193448bc995faaf5e716b23e8dee7b71128b" gracePeriod=600 Feb 18 00:47:31 crc kubenswrapper[4847]: I0218 00:47:31.079191 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1989970b-d11c-44b8-b0b7-011c8e842c1f" containerName="thanos-sidecar" containerID="cri-o://1ac9c5e88215432ae3f672dcbe3ce135c29b1ad1ba1fdd3d7185b08c23ac44be" gracePeriod=600 Feb 18 00:47:31 crc kubenswrapper[4847]: I0218 00:47:31.079237 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="1989970b-d11c-44b8-b0b7-011c8e842c1f" containerName="config-reloader" containerID="cri-o://9b833e15ee94a91431b1f7cd984e8f8d1f794fc25fe0aa85a270e7ba875700da" gracePeriod=600 Feb 18 00:47:31 crc kubenswrapper[4847]: I0218 00:47:31.112751 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:47:31 crc kubenswrapper[4847]: I0218 00:47:31.132307 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12c1f1eb-8e51-4f05-b931-070a5a0612af-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:31 crc kubenswrapper[4847]: I0218 00:47:31.132340 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s9gx\" (UniqueName: 
\"kubernetes.io/projected/12c1f1eb-8e51-4f05-b931-070a5a0612af-kube-api-access-7s9gx\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:31 crc kubenswrapper[4847]: I0218 00:47:31.473435 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xh6ft-config-thv2s" event={"ID":"612ccf37-b028-408b-9775-aca576dea633","Type":"ContainerStarted","Data":"6de6202f5b0ab30ac8647ef1d1bb24fdd7af4dd99fa43095f095db6b1682ecd2"} Feb 18 00:47:31 crc kubenswrapper[4847]: I0218 00:47:31.473493 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xh6ft-config-thv2s" event={"ID":"612ccf37-b028-408b-9775-aca576dea633","Type":"ContainerStarted","Data":"2fbb4fa1fb98736261025f38bb81b8b406179df5ff5b9631c6eb375a458c584a"} Feb 18 00:47:31 crc kubenswrapper[4847]: I0218 00:47:31.480252 4847 generic.go:334] "Generic (PLEG): container finished" podID="1989970b-d11c-44b8-b0b7-011c8e842c1f" containerID="1ac9c5e88215432ae3f672dcbe3ce135c29b1ad1ba1fdd3d7185b08c23ac44be" exitCode=0 Feb 18 00:47:31 crc kubenswrapper[4847]: I0218 00:47:31.480632 4847 generic.go:334] "Generic (PLEG): container finished" podID="1989970b-d11c-44b8-b0b7-011c8e842c1f" containerID="9b833e15ee94a91431b1f7cd984e8f8d1f794fc25fe0aa85a270e7ba875700da" exitCode=0 Feb 18 00:47:31 crc kubenswrapper[4847]: I0218 00:47:31.480646 4847 generic.go:334] "Generic (PLEG): container finished" podID="1989970b-d11c-44b8-b0b7-011c8e842c1f" containerID="5dd8945c9fed1a5fef0dcfc0c944193448bc995faaf5e716b23e8dee7b71128b" exitCode=0 Feb 18 00:47:31 crc kubenswrapper[4847]: I0218 00:47:31.480705 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1989970b-d11c-44b8-b0b7-011c8e842c1f","Type":"ContainerDied","Data":"1ac9c5e88215432ae3f672dcbe3ce135c29b1ad1ba1fdd3d7185b08c23ac44be"} Feb 18 00:47:31 crc kubenswrapper[4847]: I0218 00:47:31.480737 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"1989970b-d11c-44b8-b0b7-011c8e842c1f","Type":"ContainerDied","Data":"9b833e15ee94a91431b1f7cd984e8f8d1f794fc25fe0aa85a270e7ba875700da"} Feb 18 00:47:31 crc kubenswrapper[4847]: I0218 00:47:31.480748 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1989970b-d11c-44b8-b0b7-011c8e842c1f","Type":"ContainerDied","Data":"5dd8945c9fed1a5fef0dcfc0c944193448bc995faaf5e716b23e8dee7b71128b"} Feb 18 00:47:31 crc kubenswrapper[4847]: I0218 00:47:31.487976 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8hbdg" Feb 18 00:47:31 crc kubenswrapper[4847]: I0218 00:47:31.487956 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8hbdg" event={"ID":"12c1f1eb-8e51-4f05-b931-070a5a0612af","Type":"ContainerDied","Data":"dda2ad1e3c283bec779a35236decb920e40ade0111fe41551702d9af13ea6fce"} Feb 18 00:47:31 crc kubenswrapper[4847]: I0218 00:47:31.488046 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dda2ad1e3c283bec779a35236decb920e40ade0111fe41551702d9af13ea6fce" Feb 18 00:47:31 crc kubenswrapper[4847]: I0218 00:47:31.501134 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-xh6ft-config-thv2s" podStartSLOduration=2.501113743 podStartE2EDuration="2.501113743s" podCreationTimestamp="2026-02-18 00:47:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:47:31.496084814 +0000 UTC m=+1324.873435756" watchObservedRunningTime="2026-02-18 00:47:31.501113743 +0000 UTC m=+1324.878464675" Feb 18 00:47:32 crc kubenswrapper[4847]: I0218 00:47:32.500648 4847 generic.go:334] "Generic (PLEG): container finished" podID="612ccf37-b028-408b-9775-aca576dea633" 
containerID="6de6202f5b0ab30ac8647ef1d1bb24fdd7af4dd99fa43095f095db6b1682ecd2" exitCode=0 Feb 18 00:47:32 crc kubenswrapper[4847]: I0218 00:47:32.500695 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xh6ft-config-thv2s" event={"ID":"612ccf37-b028-408b-9775-aca576dea633","Type":"ContainerDied","Data":"6de6202f5b0ab30ac8647ef1d1bb24fdd7af4dd99fa43095f095db6b1682ecd2"} Feb 18 00:47:34 crc kubenswrapper[4847]: I0218 00:47:34.528803 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-xh6ft" Feb 18 00:47:35 crc kubenswrapper[4847]: I0218 00:47:35.631791 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="1989970b-d11c-44b8-b0b7-011c8e842c1f" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.127:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 00:47:36 crc kubenswrapper[4847]: I0218 00:47:36.043839 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-8hbdg"] Feb 18 00:47:36 crc kubenswrapper[4847]: I0218 00:47:36.050758 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-8hbdg"] Feb 18 00:47:37 crc kubenswrapper[4847]: I0218 00:47:37.419825 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12c1f1eb-8e51-4f05-b931-070a5a0612af" path="/var/lib/kubelet/pods/12c1f1eb-8e51-4f05-b931-070a5a0612af/volumes" Feb 18 00:47:39 crc kubenswrapper[4847]: I0218 00:47:39.747497 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xh6ft-config-thv2s" Feb 18 00:47:39 crc kubenswrapper[4847]: I0218 00:47:39.833488 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/612ccf37-b028-408b-9775-aca576dea633-var-log-ovn\") pod \"612ccf37-b028-408b-9775-aca576dea633\" (UID: \"612ccf37-b028-408b-9775-aca576dea633\") " Feb 18 00:47:39 crc kubenswrapper[4847]: I0218 00:47:39.833542 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/612ccf37-b028-408b-9775-aca576dea633-var-run-ovn\") pod \"612ccf37-b028-408b-9775-aca576dea633\" (UID: \"612ccf37-b028-408b-9775-aca576dea633\") " Feb 18 00:47:39 crc kubenswrapper[4847]: I0218 00:47:39.833594 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2742b\" (UniqueName: \"kubernetes.io/projected/612ccf37-b028-408b-9775-aca576dea633-kube-api-access-2742b\") pod \"612ccf37-b028-408b-9775-aca576dea633\" (UID: \"612ccf37-b028-408b-9775-aca576dea633\") " Feb 18 00:47:39 crc kubenswrapper[4847]: I0218 00:47:39.833683 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/612ccf37-b028-408b-9775-aca576dea633-var-run\") pod \"612ccf37-b028-408b-9775-aca576dea633\" (UID: \"612ccf37-b028-408b-9775-aca576dea633\") " Feb 18 00:47:39 crc kubenswrapper[4847]: I0218 00:47:39.833752 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/612ccf37-b028-408b-9775-aca576dea633-scripts\") pod \"612ccf37-b028-408b-9775-aca576dea633\" (UID: \"612ccf37-b028-408b-9775-aca576dea633\") " Feb 18 00:47:39 crc kubenswrapper[4847]: I0218 00:47:39.833792 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" 
(UniqueName: \"kubernetes.io/configmap/612ccf37-b028-408b-9775-aca576dea633-additional-scripts\") pod \"612ccf37-b028-408b-9775-aca576dea633\" (UID: \"612ccf37-b028-408b-9775-aca576dea633\") " Feb 18 00:47:39 crc kubenswrapper[4847]: I0218 00:47:39.834300 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-etc-swift\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0" Feb 18 00:47:39 crc kubenswrapper[4847]: I0218 00:47:39.837073 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/612ccf37-b028-408b-9775-aca576dea633-scripts" (OuterVolumeSpecName: "scripts") pod "612ccf37-b028-408b-9775-aca576dea633" (UID: "612ccf37-b028-408b-9775-aca576dea633"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:39 crc kubenswrapper[4847]: I0218 00:47:39.837195 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/612ccf37-b028-408b-9775-aca576dea633-var-run" (OuterVolumeSpecName: "var-run") pod "612ccf37-b028-408b-9775-aca576dea633" (UID: "612ccf37-b028-408b-9775-aca576dea633"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:47:39 crc kubenswrapper[4847]: I0218 00:47:39.837255 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/612ccf37-b028-408b-9775-aca576dea633-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "612ccf37-b028-408b-9775-aca576dea633" (UID: "612ccf37-b028-408b-9775-aca576dea633"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:47:39 crc kubenswrapper[4847]: I0218 00:47:39.837282 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/612ccf37-b028-408b-9775-aca576dea633-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "612ccf37-b028-408b-9775-aca576dea633" (UID: "612ccf37-b028-408b-9775-aca576dea633"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:47:39 crc kubenswrapper[4847]: I0218 00:47:39.838190 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/612ccf37-b028-408b-9775-aca576dea633-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "612ccf37-b028-408b-9775-aca576dea633" (UID: "612ccf37-b028-408b-9775-aca576dea633"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:39 crc kubenswrapper[4847]: I0218 00:47:39.845912 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/612ccf37-b028-408b-9775-aca576dea633-kube-api-access-2742b" (OuterVolumeSpecName: "kube-api-access-2742b") pod "612ccf37-b028-408b-9775-aca576dea633" (UID: "612ccf37-b028-408b-9775-aca576dea633"). InnerVolumeSpecName "kube-api-access-2742b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:39 crc kubenswrapper[4847]: I0218 00:47:39.848472 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/623045fa-a3f1-4ad5-a5f7-361f31303bfb-etc-swift\") pod \"swift-storage-0\" (UID: \"623045fa-a3f1-4ad5-a5f7-361f31303bfb\") " pod="openstack/swift-storage-0" Feb 18 00:47:39 crc kubenswrapper[4847]: I0218 00:47:39.875758 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:39 crc kubenswrapper[4847]: I0218 00:47:39.937747 4847 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/612ccf37-b028-408b-9775-aca576dea633-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:39 crc kubenswrapper[4847]: I0218 00:47:39.938075 4847 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/612ccf37-b028-408b-9775-aca576dea633-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:39 crc kubenswrapper[4847]: I0218 00:47:39.938087 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2742b\" (UniqueName: \"kubernetes.io/projected/612ccf37-b028-408b-9775-aca576dea633-kube-api-access-2742b\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:39 crc kubenswrapper[4847]: I0218 00:47:39.938100 4847 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/612ccf37-b028-408b-9775-aca576dea633-var-run\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:39 crc kubenswrapper[4847]: I0218 00:47:39.938108 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/612ccf37-b028-408b-9775-aca576dea633-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:39 crc kubenswrapper[4847]: I0218 00:47:39.938117 4847 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/612ccf37-b028-408b-9775-aca576dea633-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.041281 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1989970b-d11c-44b8-b0b7-011c8e842c1f-config-out\") pod \"1989970b-d11c-44b8-b0b7-011c8e842c1f\" (UID: 
\"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.041356 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"1989970b-d11c-44b8-b0b7-011c8e842c1f\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.041406 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1989970b-d11c-44b8-b0b7-011c8e842c1f-config\") pod \"1989970b-d11c-44b8-b0b7-011c8e842c1f\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.041431 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1989970b-d11c-44b8-b0b7-011c8e842c1f-prometheus-metric-storage-rulefiles-0\") pod \"1989970b-d11c-44b8-b0b7-011c8e842c1f\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.041449 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b97sh\" (UniqueName: \"kubernetes.io/projected/1989970b-d11c-44b8-b0b7-011c8e842c1f-kube-api-access-b97sh\") pod \"1989970b-d11c-44b8-b0b7-011c8e842c1f\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.041491 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1989970b-d11c-44b8-b0b7-011c8e842c1f-prometheus-metric-storage-rulefiles-2\") pod \"1989970b-d11c-44b8-b0b7-011c8e842c1f\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.041509 4847 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1989970b-d11c-44b8-b0b7-011c8e842c1f-web-config\") pod \"1989970b-d11c-44b8-b0b7-011c8e842c1f\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.041572 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1989970b-d11c-44b8-b0b7-011c8e842c1f-thanos-prometheus-http-client-file\") pod \"1989970b-d11c-44b8-b0b7-011c8e842c1f\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.041634 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1989970b-d11c-44b8-b0b7-011c8e842c1f-tls-assets\") pod \"1989970b-d11c-44b8-b0b7-011c8e842c1f\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.041662 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1989970b-d11c-44b8-b0b7-011c8e842c1f-prometheus-metric-storage-rulefiles-1\") pod \"1989970b-d11c-44b8-b0b7-011c8e842c1f\" (UID: \"1989970b-d11c-44b8-b0b7-011c8e842c1f\") " Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.042338 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1989970b-d11c-44b8-b0b7-011c8e842c1f-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "1989970b-d11c-44b8-b0b7-011c8e842c1f" (UID: "1989970b-d11c-44b8-b0b7-011c8e842c1f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.042473 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1989970b-d11c-44b8-b0b7-011c8e842c1f-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "1989970b-d11c-44b8-b0b7-011c8e842c1f" (UID: "1989970b-d11c-44b8-b0b7-011c8e842c1f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.058307 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1989970b-d11c-44b8-b0b7-011c8e842c1f-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "1989970b-d11c-44b8-b0b7-011c8e842c1f" (UID: "1989970b-d11c-44b8-b0b7-011c8e842c1f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.059770 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1989970b-d11c-44b8-b0b7-011c8e842c1f-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "1989970b-d11c-44b8-b0b7-011c8e842c1f" (UID: "1989970b-d11c-44b8-b0b7-011c8e842c1f"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.059779 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1989970b-d11c-44b8-b0b7-011c8e842c1f-kube-api-access-b97sh" (OuterVolumeSpecName: "kube-api-access-b97sh") pod "1989970b-d11c-44b8-b0b7-011c8e842c1f" (UID: "1989970b-d11c-44b8-b0b7-011c8e842c1f"). InnerVolumeSpecName "kube-api-access-b97sh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.060499 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1989970b-d11c-44b8-b0b7-011c8e842c1f-config" (OuterVolumeSpecName: "config") pod "1989970b-d11c-44b8-b0b7-011c8e842c1f" (UID: "1989970b-d11c-44b8-b0b7-011c8e842c1f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.060840 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1989970b-d11c-44b8-b0b7-011c8e842c1f-config-out" (OuterVolumeSpecName: "config-out") pod "1989970b-d11c-44b8-b0b7-011c8e842c1f" (UID: "1989970b-d11c-44b8-b0b7-011c8e842c1f"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.065447 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1989970b-d11c-44b8-b0b7-011c8e842c1f-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "1989970b-d11c-44b8-b0b7-011c8e842c1f" (UID: "1989970b-d11c-44b8-b0b7-011c8e842c1f"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.065549 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "1989970b-d11c-44b8-b0b7-011c8e842c1f" (UID: "1989970b-d11c-44b8-b0b7-011c8e842c1f"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.083802 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1989970b-d11c-44b8-b0b7-011c8e842c1f-web-config" (OuterVolumeSpecName: "web-config") pod "1989970b-d11c-44b8-b0b7-011c8e842c1f" (UID: "1989970b-d11c-44b8-b0b7-011c8e842c1f"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.144865 4847 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1989970b-d11c-44b8-b0b7-011c8e842c1f-config-out\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.144937 4847 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.144972 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1989970b-d11c-44b8-b0b7-011c8e842c1f-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.144991 4847 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1989970b-d11c-44b8-b0b7-011c8e842c1f-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.145004 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b97sh\" (UniqueName: \"kubernetes.io/projected/1989970b-d11c-44b8-b0b7-011c8e842c1f-kube-api-access-b97sh\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.145014 4847 reconciler_common.go:293] "Volume detached for volume 
\"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1989970b-d11c-44b8-b0b7-011c8e842c1f-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.145023 4847 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1989970b-d11c-44b8-b0b7-011c8e842c1f-web-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.145033 4847 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1989970b-d11c-44b8-b0b7-011c8e842c1f-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.145041 4847 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1989970b-d11c-44b8-b0b7-011c8e842c1f-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.145050 4847 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1989970b-d11c-44b8-b0b7-011c8e842c1f-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.149030 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.176550 4847 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.246246 4847 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.590338 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xh6ft-config-thv2s" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.590328 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xh6ft-config-thv2s" event={"ID":"612ccf37-b028-408b-9775-aca576dea633","Type":"ContainerDied","Data":"2fbb4fa1fb98736261025f38bb81b8b406179df5ff5b9631c6eb375a458c584a"} Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.590815 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fbb4fa1fb98736261025f38bb81b8b406179df5ff5b9631c6eb375a458c584a" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.595617 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qldtf" event={"ID":"60c4f757-8241-4268-92af-da05a6e0217e","Type":"ContainerStarted","Data":"a25fd2e270eee2583d85f4e223f799766a2f4cf6ee32004726bba00921f310d2"} Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.598129 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"1989970b-d11c-44b8-b0b7-011c8e842c1f","Type":"ContainerDied","Data":"9e0ce529684c3a98f2625b3523d9295a1f3140b0889e0b08ecb6327e9f1cf4c1"} Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.598226 4847 scope.go:117] "RemoveContainer" 
containerID="1ac9c5e88215432ae3f672dcbe3ce135c29b1ad1ba1fdd3d7185b08c23ac44be" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.598154 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.614203 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-qldtf" podStartSLOduration=4.937560953 podStartE2EDuration="21.61418246s" podCreationTimestamp="2026-02-18 00:47:19 +0000 UTC" firstStartedPulling="2026-02-18 00:47:23.062052824 +0000 UTC m=+1316.439403756" lastFinishedPulling="2026-02-18 00:47:39.738674301 +0000 UTC m=+1333.116025263" observedRunningTime="2026-02-18 00:47:40.613218827 +0000 UTC m=+1333.990569799" watchObservedRunningTime="2026-02-18 00:47:40.61418246 +0000 UTC m=+1333.991533432" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.633974 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="1989970b-d11c-44b8-b0b7-011c8e842c1f" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.127:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.634005 4847 scope.go:117] "RemoveContainer" containerID="9b833e15ee94a91431b1f7cd984e8f8d1f794fc25fe0aa85a270e7ba875700da" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.715535 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.717976 4847 scope.go:117] "RemoveContainer" containerID="5dd8945c9fed1a5fef0dcfc0c944193448bc995faaf5e716b23e8dee7b71128b" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.728130 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 00:47:40 crc kubenswrapper[4847]: 
I0218 00:47:40.737544 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.757785 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 00:47:40 crc kubenswrapper[4847]: E0218 00:47:40.758241 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1989970b-d11c-44b8-b0b7-011c8e842c1f" containerName="prometheus" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.758255 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="1989970b-d11c-44b8-b0b7-011c8e842c1f" containerName="prometheus" Feb 18 00:47:40 crc kubenswrapper[4847]: E0218 00:47:40.758267 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12c1f1eb-8e51-4f05-b931-070a5a0612af" containerName="mariadb-account-create-update" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.758274 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="12c1f1eb-8e51-4f05-b931-070a5a0612af" containerName="mariadb-account-create-update" Feb 18 00:47:40 crc kubenswrapper[4847]: E0218 00:47:40.758287 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1989970b-d11c-44b8-b0b7-011c8e842c1f" containerName="thanos-sidecar" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.758293 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="1989970b-d11c-44b8-b0b7-011c8e842c1f" containerName="thanos-sidecar" Feb 18 00:47:40 crc kubenswrapper[4847]: E0218 00:47:40.758306 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1989970b-d11c-44b8-b0b7-011c8e842c1f" containerName="config-reloader" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.758312 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="1989970b-d11c-44b8-b0b7-011c8e842c1f" containerName="config-reloader" Feb 18 00:47:40 crc kubenswrapper[4847]: E0218 00:47:40.758326 4847 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="612ccf37-b028-408b-9775-aca576dea633" containerName="ovn-config" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.758331 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="612ccf37-b028-408b-9775-aca576dea633" containerName="ovn-config" Feb 18 00:47:40 crc kubenswrapper[4847]: E0218 00:47:40.758347 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1989970b-d11c-44b8-b0b7-011c8e842c1f" containerName="init-config-reloader" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.758353 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="1989970b-d11c-44b8-b0b7-011c8e842c1f" containerName="init-config-reloader" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.758523 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="1989970b-d11c-44b8-b0b7-011c8e842c1f" containerName="config-reloader" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.758538 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="612ccf37-b028-408b-9775-aca576dea633" containerName="ovn-config" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.758549 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="1989970b-d11c-44b8-b0b7-011c8e842c1f" containerName="prometheus" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.758565 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="12c1f1eb-8e51-4f05-b931-070a5a0612af" containerName="mariadb-account-create-update" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.758578 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="1989970b-d11c-44b8-b0b7-011c8e842c1f" containerName="thanos-sidecar" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.760248 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.765073 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.765347 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.765686 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.765838 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.766029 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-gh5vq" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.766117 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.766214 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.766314 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.766479 4847 scope.go:117] "RemoveContainer" containerID="bbe804413d16311bc73e463a320aae7e1af7fcec38d9771f74f50bb56dd17c1f" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.777033 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.797634 4847 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.858496 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.862091 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.862176 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.862201 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.862229 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: 
I0218 00:47:40.862259 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.862297 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.862418 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.862487 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.862517 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tswfq\" (UniqueName: \"kubernetes.io/projected/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-kube-api-access-tswfq\") pod \"prometheus-metric-storage-0\" (UID: 
\"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.862569 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.862658 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.862707 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.862804 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-config\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.918670 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ovn-controller-xh6ft-config-thv2s"] Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.925617 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xh6ft-config-thv2s"] Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.965912 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.965972 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.966014 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.966052 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.966099 4847 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.966131 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.966154 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tswfq\" (UniqueName: \"kubernetes.io/projected/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-kube-api-access-tswfq\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.966187 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.966223 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 
00:47:40.966252 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.966306 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-config\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.966339 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.966390 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.971949 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.973362 4847 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.981055 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.981457 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.982793 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.983181 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.983352 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.984145 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.985870 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-config\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:40 crc kubenswrapper[4847]: I0218 00:47:40.989314 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:40.993911 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:40.996021 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.011483 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tswfq\" (UniqueName: \"kubernetes.io/projected/f622e85f-b79e-4abb-aa5d-bb51ca59d1ae-kube-api-access-tswfq\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.013978 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"prometheus-metric-storage-0\" (UID: \"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae\") " pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.092713 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-5p2lc"] Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.093859 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5p2lc" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.103765 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.120657 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5p2lc"] Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.136432 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.170541 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mjqg\" (UniqueName: \"kubernetes.io/projected/a42b82b8-55dd-48ab-86ea-0c50c940c8f8-kube-api-access-5mjqg\") pod \"root-account-create-update-5p2lc\" (UID: \"a42b82b8-55dd-48ab-86ea-0c50c940c8f8\") " pod="openstack/root-account-create-update-5p2lc" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.170631 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a42b82b8-55dd-48ab-86ea-0c50c940c8f8-operator-scripts\") pod \"root-account-create-update-5p2lc\" (UID: \"a42b82b8-55dd-48ab-86ea-0c50c940c8f8\") " pod="openstack/root-account-create-update-5p2lc" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.272496 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mjqg\" (UniqueName: \"kubernetes.io/projected/a42b82b8-55dd-48ab-86ea-0c50c940c8f8-kube-api-access-5mjqg\") pod \"root-account-create-update-5p2lc\" (UID: \"a42b82b8-55dd-48ab-86ea-0c50c940c8f8\") " pod="openstack/root-account-create-update-5p2lc" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.272904 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a42b82b8-55dd-48ab-86ea-0c50c940c8f8-operator-scripts\") pod \"root-account-create-update-5p2lc\" (UID: \"a42b82b8-55dd-48ab-86ea-0c50c940c8f8\") " pod="openstack/root-account-create-update-5p2lc" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.273551 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a42b82b8-55dd-48ab-86ea-0c50c940c8f8-operator-scripts\") pod \"root-account-create-update-5p2lc\" (UID: \"a42b82b8-55dd-48ab-86ea-0c50c940c8f8\") " pod="openstack/root-account-create-update-5p2lc" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.282862 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-hjjdx"] Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.284090 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-hjjdx" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.297821 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mjqg\" (UniqueName: \"kubernetes.io/projected/a42b82b8-55dd-48ab-86ea-0c50c940c8f8-kube-api-access-5mjqg\") pod \"root-account-create-update-5p2lc\" (UID: \"a42b82b8-55dd-48ab-86ea-0c50c940c8f8\") " pod="openstack/root-account-create-update-5p2lc" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.311255 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-521e-account-create-update-crkzh"] Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.312796 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-521e-account-create-update-crkzh" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.323887 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.375127 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc-operator-scripts\") pod \"heat-521e-account-create-update-crkzh\" (UID: \"0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc\") " pod="openstack/heat-521e-account-create-update-crkzh" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.375259 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45mlh\" (UniqueName: \"kubernetes.io/projected/0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc-kube-api-access-45mlh\") pod \"heat-521e-account-create-update-crkzh\" (UID: \"0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc\") " pod="openstack/heat-521e-account-create-update-crkzh" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.375499 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dtfp\" (UniqueName: \"kubernetes.io/projected/66b4e4a8-e7ee-4541-92de-2a0fe41f879b-kube-api-access-5dtfp\") pod \"heat-db-create-hjjdx\" (UID: \"66b4e4a8-e7ee-4541-92de-2a0fe41f879b\") " pod="openstack/heat-db-create-hjjdx" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.375660 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-hjjdx"] Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.375672 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66b4e4a8-e7ee-4541-92de-2a0fe41f879b-operator-scripts\") pod \"heat-db-create-hjjdx\" (UID: 
\"66b4e4a8-e7ee-4541-92de-2a0fe41f879b\") " pod="openstack/heat-db-create-hjjdx" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.418934 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5p2lc" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.424050 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1989970b-d11c-44b8-b0b7-011c8e842c1f" path="/var/lib/kubelet/pods/1989970b-d11c-44b8-b0b7-011c8e842c1f/volumes" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.424884 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="612ccf37-b028-408b-9775-aca576dea633" path="/var/lib/kubelet/pods/612ccf37-b028-408b-9775-aca576dea633/volumes" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.425652 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-521e-account-create-update-crkzh"] Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.443044 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6fa8-account-create-update-8g6f8"] Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.444461 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6fa8-account-create-update-8g6f8" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.453328 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.453477 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-xg9hr"] Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.465112 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-xg9hr" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.466479 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6fa8-account-create-update-8g6f8"] Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.478533 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3-operator-scripts\") pod \"cinder-6fa8-account-create-update-8g6f8\" (UID: \"0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3\") " pod="openstack/cinder-6fa8-account-create-update-8g6f8" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.478559 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6752\" (UniqueName: \"kubernetes.io/projected/0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3-kube-api-access-s6752\") pod \"cinder-6fa8-account-create-update-8g6f8\" (UID: \"0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3\") " pod="openstack/cinder-6fa8-account-create-update-8g6f8" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.478582 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45mlh\" (UniqueName: \"kubernetes.io/projected/0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc-kube-api-access-45mlh\") pod \"heat-521e-account-create-update-crkzh\" (UID: \"0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc\") " pod="openstack/heat-521e-account-create-update-crkzh" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.478622 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dtfp\" (UniqueName: \"kubernetes.io/projected/66b4e4a8-e7ee-4541-92de-2a0fe41f879b-kube-api-access-5dtfp\") pod \"heat-db-create-hjjdx\" (UID: \"66b4e4a8-e7ee-4541-92de-2a0fe41f879b\") " pod="openstack/heat-db-create-hjjdx" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.478667 4847 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xg9hr"] Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.478713 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66b4e4a8-e7ee-4541-92de-2a0fe41f879b-operator-scripts\") pod \"heat-db-create-hjjdx\" (UID: \"66b4e4a8-e7ee-4541-92de-2a0fe41f879b\") " pod="openstack/heat-db-create-hjjdx" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.478778 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc-operator-scripts\") pod \"heat-521e-account-create-update-crkzh\" (UID: \"0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc\") " pod="openstack/heat-521e-account-create-update-crkzh" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.482273 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc-operator-scripts\") pod \"heat-521e-account-create-update-crkzh\" (UID: \"0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc\") " pod="openstack/heat-521e-account-create-update-crkzh" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.489182 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66b4e4a8-e7ee-4541-92de-2a0fe41f879b-operator-scripts\") pod \"heat-db-create-hjjdx\" (UID: \"66b4e4a8-e7ee-4541-92de-2a0fe41f879b\") " pod="openstack/heat-db-create-hjjdx" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.503694 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-j4gvn"] Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.505518 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-j4gvn" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.505899 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45mlh\" (UniqueName: \"kubernetes.io/projected/0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc-kube-api-access-45mlh\") pod \"heat-521e-account-create-update-crkzh\" (UID: \"0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc\") " pod="openstack/heat-521e-account-create-update-crkzh" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.508872 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5phwk" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.509118 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.511591 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.511802 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dtfp\" (UniqueName: \"kubernetes.io/projected/66b4e4a8-e7ee-4541-92de-2a0fe41f879b-kube-api-access-5dtfp\") pod \"heat-db-create-hjjdx\" (UID: \"66b4e4a8-e7ee-4541-92de-2a0fe41f879b\") " pod="openstack/heat-db-create-hjjdx" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.514296 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.531662 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-j4gvn"] Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.579809 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5slm\" (UniqueName: \"kubernetes.io/projected/dec96268-406b-4a40-8825-9e3f0938d457-kube-api-access-f5slm\") pod \"cinder-db-create-xg9hr\" (UID: 
\"dec96268-406b-4a40-8825-9e3f0938d457\") " pod="openstack/cinder-db-create-xg9hr" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.580093 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dec96268-406b-4a40-8825-9e3f0938d457-operator-scripts\") pod \"cinder-db-create-xg9hr\" (UID: \"dec96268-406b-4a40-8825-9e3f0938d457\") " pod="openstack/cinder-db-create-xg9hr" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.580136 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8l2g\" (UniqueName: \"kubernetes.io/projected/191319b2-ff52-494a-8ba9-a7402cc0dda7-kube-api-access-p8l2g\") pod \"keystone-db-sync-j4gvn\" (UID: \"191319b2-ff52-494a-8ba9-a7402cc0dda7\") " pod="openstack/keystone-db-sync-j4gvn" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.580278 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/191319b2-ff52-494a-8ba9-a7402cc0dda7-config-data\") pod \"keystone-db-sync-j4gvn\" (UID: \"191319b2-ff52-494a-8ba9-a7402cc0dda7\") " pod="openstack/keystone-db-sync-j4gvn" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.580327 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3-operator-scripts\") pod \"cinder-6fa8-account-create-update-8g6f8\" (UID: \"0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3\") " pod="openstack/cinder-6fa8-account-create-update-8g6f8" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.580346 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6752\" (UniqueName: \"kubernetes.io/projected/0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3-kube-api-access-s6752\") pod 
\"cinder-6fa8-account-create-update-8g6f8\" (UID: \"0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3\") " pod="openstack/cinder-6fa8-account-create-update-8g6f8" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.580519 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/191319b2-ff52-494a-8ba9-a7402cc0dda7-combined-ca-bundle\") pod \"keystone-db-sync-j4gvn\" (UID: \"191319b2-ff52-494a-8ba9-a7402cc0dda7\") " pod="openstack/keystone-db-sync-j4gvn" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.581268 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3-operator-scripts\") pod \"cinder-6fa8-account-create-update-8g6f8\" (UID: \"0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3\") " pod="openstack/cinder-6fa8-account-create-update-8g6f8" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.612659 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6752\" (UniqueName: \"kubernetes.io/projected/0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3-kube-api-access-s6752\") pod \"cinder-6fa8-account-create-update-8g6f8\" (UID: \"0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3\") " pod="openstack/cinder-6fa8-account-create-update-8g6f8" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.625486 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-hjjdx" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.633302 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"623045fa-a3f1-4ad5-a5f7-361f31303bfb","Type":"ContainerStarted","Data":"5e8c4dcc0c6ba42f9e2d57ab20d7efcb821f7465f8655aa083a3fd58c7cbcc36"} Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.650805 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-521e-account-create-update-crkzh" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.686843 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/191319b2-ff52-494a-8ba9-a7402cc0dda7-combined-ca-bundle\") pod \"keystone-db-sync-j4gvn\" (UID: \"191319b2-ff52-494a-8ba9-a7402cc0dda7\") " pod="openstack/keystone-db-sync-j4gvn" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.686952 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5slm\" (UniqueName: \"kubernetes.io/projected/dec96268-406b-4a40-8825-9e3f0938d457-kube-api-access-f5slm\") pod \"cinder-db-create-xg9hr\" (UID: \"dec96268-406b-4a40-8825-9e3f0938d457\") " pod="openstack/cinder-db-create-xg9hr" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.687007 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dec96268-406b-4a40-8825-9e3f0938d457-operator-scripts\") pod \"cinder-db-create-xg9hr\" (UID: \"dec96268-406b-4a40-8825-9e3f0938d457\") " pod="openstack/cinder-db-create-xg9hr" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.687051 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8l2g\" (UniqueName: \"kubernetes.io/projected/191319b2-ff52-494a-8ba9-a7402cc0dda7-kube-api-access-p8l2g\") pod \"keystone-db-sync-j4gvn\" (UID: \"191319b2-ff52-494a-8ba9-a7402cc0dda7\") " pod="openstack/keystone-db-sync-j4gvn" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.687085 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/191319b2-ff52-494a-8ba9-a7402cc0dda7-config-data\") pod \"keystone-db-sync-j4gvn\" (UID: \"191319b2-ff52-494a-8ba9-a7402cc0dda7\") " pod="openstack/keystone-db-sync-j4gvn" Feb 18 
00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.688787 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dec96268-406b-4a40-8825-9e3f0938d457-operator-scripts\") pod \"cinder-db-create-xg9hr\" (UID: \"dec96268-406b-4a40-8825-9e3f0938d457\") " pod="openstack/cinder-db-create-xg9hr" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.693198 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/191319b2-ff52-494a-8ba9-a7402cc0dda7-combined-ca-bundle\") pod \"keystone-db-sync-j4gvn\" (UID: \"191319b2-ff52-494a-8ba9-a7402cc0dda7\") " pod="openstack/keystone-db-sync-j4gvn" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.702540 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/191319b2-ff52-494a-8ba9-a7402cc0dda7-config-data\") pod \"keystone-db-sync-j4gvn\" (UID: \"191319b2-ff52-494a-8ba9-a7402cc0dda7\") " pod="openstack/keystone-db-sync-j4gvn" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.713697 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-d6zdc"] Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.715320 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-d6zdc" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.723276 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8l2g\" (UniqueName: \"kubernetes.io/projected/191319b2-ff52-494a-8ba9-a7402cc0dda7-kube-api-access-p8l2g\") pod \"keystone-db-sync-j4gvn\" (UID: \"191319b2-ff52-494a-8ba9-a7402cc0dda7\") " pod="openstack/keystone-db-sync-j4gvn" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.732931 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5slm\" (UniqueName: \"kubernetes.io/projected/dec96268-406b-4a40-8825-9e3f0938d457-kube-api-access-f5slm\") pod \"cinder-db-create-xg9hr\" (UID: \"dec96268-406b-4a40-8825-9e3f0938d457\") " pod="openstack/cinder-db-create-xg9hr" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.747745 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-d6zdc"] Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.784504 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-061e-account-create-update-khvj9"] Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.785858 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-061e-account-create-update-khvj9" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.788488 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmj49\" (UniqueName: \"kubernetes.io/projected/b181ca27-a468-4527-a748-5cf4ac36fdb6-kube-api-access-mmj49\") pod \"neutron-db-create-d6zdc\" (UID: \"b181ca27-a468-4527-a748-5cf4ac36fdb6\") " pod="openstack/neutron-db-create-d6zdc" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.788592 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b181ca27-a468-4527-a748-5cf4ac36fdb6-operator-scripts\") pod \"neutron-db-create-d6zdc\" (UID: \"b181ca27-a468-4527-a748-5cf4ac36fdb6\") " pod="openstack/neutron-db-create-d6zdc" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.795131 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.804548 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-061e-account-create-update-khvj9"] Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.808995 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6fa8-account-create-update-8g6f8" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.846968 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xg9hr" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.879098 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-j4gvn" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.885330 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-rfvrp"] Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.886680 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rfvrp" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.890630 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmhp6\" (UniqueName: \"kubernetes.io/projected/457ab17e-eca3-40eb-9116-fb82cbbcc65f-kube-api-access-gmhp6\") pod \"barbican-061e-account-create-update-khvj9\" (UID: \"457ab17e-eca3-40eb-9116-fb82cbbcc65f\") " pod="openstack/barbican-061e-account-create-update-khvj9" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.890676 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmj49\" (UniqueName: \"kubernetes.io/projected/b181ca27-a468-4527-a748-5cf4ac36fdb6-kube-api-access-mmj49\") pod \"neutron-db-create-d6zdc\" (UID: \"b181ca27-a468-4527-a748-5cf4ac36fdb6\") " pod="openstack/neutron-db-create-d6zdc" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.890757 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b181ca27-a468-4527-a748-5cf4ac36fdb6-operator-scripts\") pod \"neutron-db-create-d6zdc\" (UID: \"b181ca27-a468-4527-a748-5cf4ac36fdb6\") " pod="openstack/neutron-db-create-d6zdc" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.890780 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/457ab17e-eca3-40eb-9116-fb82cbbcc65f-operator-scripts\") pod \"barbican-061e-account-create-update-khvj9\" (UID: 
\"457ab17e-eca3-40eb-9116-fb82cbbcc65f\") " pod="openstack/barbican-061e-account-create-update-khvj9" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.891785 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b181ca27-a468-4527-a748-5cf4ac36fdb6-operator-scripts\") pod \"neutron-db-create-d6zdc\" (UID: \"b181ca27-a468-4527-a748-5cf4ac36fdb6\") " pod="openstack/neutron-db-create-d6zdc" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.908679 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rfvrp"] Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.927927 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmj49\" (UniqueName: \"kubernetes.io/projected/b181ca27-a468-4527-a748-5cf4ac36fdb6-kube-api-access-mmj49\") pod \"neutron-db-create-d6zdc\" (UID: \"b181ca27-a468-4527-a748-5cf4ac36fdb6\") " pod="openstack/neutron-db-create-d6zdc" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.995788 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.997623 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvkf6\" (UniqueName: \"kubernetes.io/projected/b17f9edc-950a-4930-a9e8-cb5accfebfd0-kube-api-access-vvkf6\") pod \"barbican-db-create-rfvrp\" (UID: \"b17f9edc-950a-4930-a9e8-cb5accfebfd0\") " pod="openstack/barbican-db-create-rfvrp" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.997660 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b17f9edc-950a-4930-a9e8-cb5accfebfd0-operator-scripts\") pod \"barbican-db-create-rfvrp\" (UID: \"b17f9edc-950a-4930-a9e8-cb5accfebfd0\") " 
pod="openstack/barbican-db-create-rfvrp" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.997683 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/457ab17e-eca3-40eb-9116-fb82cbbcc65f-operator-scripts\") pod \"barbican-061e-account-create-update-khvj9\" (UID: \"457ab17e-eca3-40eb-9116-fb82cbbcc65f\") " pod="openstack/barbican-061e-account-create-update-khvj9" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.997769 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmhp6\" (UniqueName: \"kubernetes.io/projected/457ab17e-eca3-40eb-9116-fb82cbbcc65f-kube-api-access-gmhp6\") pod \"barbican-061e-account-create-update-khvj9\" (UID: \"457ab17e-eca3-40eb-9116-fb82cbbcc65f\") " pod="openstack/barbican-061e-account-create-update-khvj9" Feb 18 00:47:41 crc kubenswrapper[4847]: I0218 00:47:41.999382 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/457ab17e-eca3-40eb-9116-fb82cbbcc65f-operator-scripts\") pod \"barbican-061e-account-create-update-khvj9\" (UID: \"457ab17e-eca3-40eb-9116-fb82cbbcc65f\") " pod="openstack/barbican-061e-account-create-update-khvj9" Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.019842 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmhp6\" (UniqueName: \"kubernetes.io/projected/457ab17e-eca3-40eb-9116-fb82cbbcc65f-kube-api-access-gmhp6\") pod \"barbican-061e-account-create-update-khvj9\" (UID: \"457ab17e-eca3-40eb-9116-fb82cbbcc65f\") " pod="openstack/barbican-061e-account-create-update-khvj9" Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.089956 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-4238-account-create-update-jdjnk"] Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.094129 4847 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/neutron-4238-account-create-update-jdjnk" Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.109543 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.114318 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvkf6\" (UniqueName: \"kubernetes.io/projected/b17f9edc-950a-4930-a9e8-cb5accfebfd0-kube-api-access-vvkf6\") pod \"barbican-db-create-rfvrp\" (UID: \"b17f9edc-950a-4930-a9e8-cb5accfebfd0\") " pod="openstack/barbican-db-create-rfvrp" Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.114406 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b17f9edc-950a-4930-a9e8-cb5accfebfd0-operator-scripts\") pod \"barbican-db-create-rfvrp\" (UID: \"b17f9edc-950a-4930-a9e8-cb5accfebfd0\") " pod="openstack/barbican-db-create-rfvrp" Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.138411 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b17f9edc-950a-4930-a9e8-cb5accfebfd0-operator-scripts\") pod \"barbican-db-create-rfvrp\" (UID: \"b17f9edc-950a-4930-a9e8-cb5accfebfd0\") " pod="openstack/barbican-db-create-rfvrp" Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.145526 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-d6zdc" Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.169295 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-061e-account-create-update-khvj9" Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.187685 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4238-account-create-update-jdjnk"] Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.216710 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvkf6\" (UniqueName: \"kubernetes.io/projected/b17f9edc-950a-4930-a9e8-cb5accfebfd0-kube-api-access-vvkf6\") pod \"barbican-db-create-rfvrp\" (UID: \"b17f9edc-950a-4930-a9e8-cb5accfebfd0\") " pod="openstack/barbican-db-create-rfvrp" Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.218590 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28ae3c38-4d5f-4601-a112-4b11ec4324b2-operator-scripts\") pod \"neutron-4238-account-create-update-jdjnk\" (UID: \"28ae3c38-4d5f-4601-a112-4b11ec4324b2\") " pod="openstack/neutron-4238-account-create-update-jdjnk" Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.232018 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnztf\" (UniqueName: \"kubernetes.io/projected/28ae3c38-4d5f-4601-a112-4b11ec4324b2-kube-api-access-wnztf\") pod \"neutron-4238-account-create-update-jdjnk\" (UID: \"28ae3c38-4d5f-4601-a112-4b11ec4324b2\") " pod="openstack/neutron-4238-account-create-update-jdjnk" Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.303098 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-rfvrp" Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.325307 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5p2lc"] Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.334141 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28ae3c38-4d5f-4601-a112-4b11ec4324b2-operator-scripts\") pod \"neutron-4238-account-create-update-jdjnk\" (UID: \"28ae3c38-4d5f-4601-a112-4b11ec4324b2\") " pod="openstack/neutron-4238-account-create-update-jdjnk" Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.334211 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnztf\" (UniqueName: \"kubernetes.io/projected/28ae3c38-4d5f-4601-a112-4b11ec4324b2-kube-api-access-wnztf\") pod \"neutron-4238-account-create-update-jdjnk\" (UID: \"28ae3c38-4d5f-4601-a112-4b11ec4324b2\") " pod="openstack/neutron-4238-account-create-update-jdjnk" Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.335095 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28ae3c38-4d5f-4601-a112-4b11ec4324b2-operator-scripts\") pod \"neutron-4238-account-create-update-jdjnk\" (UID: \"28ae3c38-4d5f-4601-a112-4b11ec4324b2\") " pod="openstack/neutron-4238-account-create-update-jdjnk" Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.379649 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnztf\" (UniqueName: \"kubernetes.io/projected/28ae3c38-4d5f-4601-a112-4b11ec4324b2-kube-api-access-wnztf\") pod \"neutron-4238-account-create-update-jdjnk\" (UID: \"28ae3c38-4d5f-4601-a112-4b11ec4324b2\") " pod="openstack/neutron-4238-account-create-update-jdjnk" Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.445025 4847 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-hjjdx"] Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.494051 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4238-account-create-update-jdjnk" Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.660645 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5p2lc" event={"ID":"a42b82b8-55dd-48ab-86ea-0c50c940c8f8","Type":"ContainerStarted","Data":"e52e40ec33d19cea2bc1eace1e9a7864f8c15a17b27d2f9490037e5284b53c90"} Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.661987 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-hjjdx" event={"ID":"66b4e4a8-e7ee-4541-92de-2a0fe41f879b","Type":"ContainerStarted","Data":"29b9f06192ff469c2f3d616d143bdd860e870f9a992f0da91b11b812f69ee4c4"} Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.663335 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae","Type":"ContainerStarted","Data":"11e1a943142def6e6f9cc3b1a713ad20eb4085654bd6daee09672606a151963b"} Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.848092 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6fa8-account-create-update-8g6f8"] Feb 18 00:47:42 crc kubenswrapper[4847]: I0218 00:47:42.978026 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-j4gvn"] Feb 18 00:47:43 crc kubenswrapper[4847]: I0218 00:47:43.305805 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-521e-account-create-update-crkzh"] Feb 18 00:47:43 crc kubenswrapper[4847]: I0218 00:47:43.324738 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xg9hr"] Feb 18 00:47:43 crc kubenswrapper[4847]: I0218 00:47:43.349387 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-061e-account-create-update-khvj9"] Feb 18 00:47:43 crc kubenswrapper[4847]: W0218 00:47:43.350753 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod457ab17e_eca3_40eb_9116_fb82cbbcc65f.slice/crio-7b40d03d51b83a50471a0200ac498a4ab90ae61d401dc09e25184713d8f2653b WatchSource:0}: Error finding container 7b40d03d51b83a50471a0200ac498a4ab90ae61d401dc09e25184713d8f2653b: Status 404 returned error can't find the container with id 7b40d03d51b83a50471a0200ac498a4ab90ae61d401dc09e25184713d8f2653b Feb 18 00:47:43 crc kubenswrapper[4847]: W0218 00:47:43.369272 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb181ca27_a468_4527_a748_5cf4ac36fdb6.slice/crio-b5b8b7601e7f0d0c70c2827fe8648b16634dac9dfb4254f5696e9ec348c053db WatchSource:0}: Error finding container b5b8b7601e7f0d0c70c2827fe8648b16634dac9dfb4254f5696e9ec348c053db: Status 404 returned error can't find the container with id b5b8b7601e7f0d0c70c2827fe8648b16634dac9dfb4254f5696e9ec348c053db Feb 18 00:47:43 crc kubenswrapper[4847]: I0218 00:47:43.373055 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-d6zdc"] Feb 18 00:47:43 crc kubenswrapper[4847]: I0218 00:47:43.400526 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rfvrp"] Feb 18 00:47:43 crc kubenswrapper[4847]: I0218 00:47:43.524130 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4238-account-create-update-jdjnk"] Feb 18 00:47:43 crc kubenswrapper[4847]: I0218 00:47:43.672875 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-d6zdc" event={"ID":"b181ca27-a468-4527-a748-5cf4ac36fdb6","Type":"ContainerStarted","Data":"b5b8b7601e7f0d0c70c2827fe8648b16634dac9dfb4254f5696e9ec348c053db"} Feb 18 00:47:43 crc kubenswrapper[4847]: I0218 
00:47:43.674231 4847 generic.go:334] "Generic (PLEG): container finished" podID="0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3" containerID="b2f49f1f7519947baee7fc615ba3fc4702cee4e29361a070f206a71ef9d82eb6" exitCode=0 Feb 18 00:47:43 crc kubenswrapper[4847]: I0218 00:47:43.674276 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6fa8-account-create-update-8g6f8" event={"ID":"0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3","Type":"ContainerDied","Data":"b2f49f1f7519947baee7fc615ba3fc4702cee4e29361a070f206a71ef9d82eb6"} Feb 18 00:47:43 crc kubenswrapper[4847]: I0218 00:47:43.674293 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6fa8-account-create-update-8g6f8" event={"ID":"0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3","Type":"ContainerStarted","Data":"c276beadf4a602173eef27d6d34f792ff9322d02fd568b927e037c32e177e777"} Feb 18 00:47:43 crc kubenswrapper[4847]: I0218 00:47:43.676394 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xg9hr" event={"ID":"dec96268-406b-4a40-8825-9e3f0938d457","Type":"ContainerStarted","Data":"f86257c291a5ca285fa8c0ba6e461514e42965bab4274791335685db8cd7a44e"} Feb 18 00:47:43 crc kubenswrapper[4847]: I0218 00:47:43.677620 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rfvrp" event={"ID":"b17f9edc-950a-4930-a9e8-cb5accfebfd0","Type":"ContainerStarted","Data":"ed08bb9066733ff2cd93088eb1ff30ac6be63590144aaa98193aa2f7bb00ff70"} Feb 18 00:47:43 crc kubenswrapper[4847]: I0218 00:47:43.678869 4847 generic.go:334] "Generic (PLEG): container finished" podID="66b4e4a8-e7ee-4541-92de-2a0fe41f879b" containerID="d6f883501b59c3e4144f5c290125a27a0f6fbaaac7d5530377a4015b555eef77" exitCode=0 Feb 18 00:47:43 crc kubenswrapper[4847]: I0218 00:47:43.678919 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-hjjdx" 
event={"ID":"66b4e4a8-e7ee-4541-92de-2a0fe41f879b","Type":"ContainerDied","Data":"d6f883501b59c3e4144f5c290125a27a0f6fbaaac7d5530377a4015b555eef77"} Feb 18 00:47:43 crc kubenswrapper[4847]: I0218 00:47:43.680069 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-521e-account-create-update-crkzh" event={"ID":"0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc","Type":"ContainerStarted","Data":"c442670d2278a329db7981ac0c8f5175b6dc3f1632d64c0d4b3abbb2c415fa2a"} Feb 18 00:47:43 crc kubenswrapper[4847]: I0218 00:47:43.681045 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-j4gvn" event={"ID":"191319b2-ff52-494a-8ba9-a7402cc0dda7","Type":"ContainerStarted","Data":"de51ce5eafe25fa16435c502c202d8143bdcb954737deb8d13c6eb5c96933a23"} Feb 18 00:47:43 crc kubenswrapper[4847]: I0218 00:47:43.682287 4847 generic.go:334] "Generic (PLEG): container finished" podID="a42b82b8-55dd-48ab-86ea-0c50c940c8f8" containerID="811b0c367e9c7a7af816d1f79a67443f9720d4729e096b1f1acd97e121fdeb5a" exitCode=0 Feb 18 00:47:43 crc kubenswrapper[4847]: I0218 00:47:43.682324 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5p2lc" event={"ID":"a42b82b8-55dd-48ab-86ea-0c50c940c8f8","Type":"ContainerDied","Data":"811b0c367e9c7a7af816d1f79a67443f9720d4729e096b1f1acd97e121fdeb5a"} Feb 18 00:47:43 crc kubenswrapper[4847]: I0218 00:47:43.683423 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-061e-account-create-update-khvj9" event={"ID":"457ab17e-eca3-40eb-9116-fb82cbbcc65f","Type":"ContainerStarted","Data":"7b40d03d51b83a50471a0200ac498a4ab90ae61d401dc09e25184713d8f2653b"} Feb 18 00:47:44 crc kubenswrapper[4847]: W0218 00:47:44.213805 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28ae3c38_4d5f_4601_a112_4b11ec4324b2.slice/crio-069d3c93d61239a393a6cecabdc1f727a015eb05fdd26a25d3a6a0d1df949e6a 
WatchSource:0}: Error finding container 069d3c93d61239a393a6cecabdc1f727a015eb05fdd26a25d3a6a0d1df949e6a: Status 404 returned error can't find the container with id 069d3c93d61239a393a6cecabdc1f727a015eb05fdd26a25d3a6a0d1df949e6a Feb 18 00:47:44 crc kubenswrapper[4847]: I0218 00:47:44.700724 4847 generic.go:334] "Generic (PLEG): container finished" podID="457ab17e-eca3-40eb-9116-fb82cbbcc65f" containerID="0ac8d7dcda0f6c26811b48165e3e0ce82324aafe16f868ad396207a942d24ffe" exitCode=0 Feb 18 00:47:44 crc kubenswrapper[4847]: I0218 00:47:44.701081 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-061e-account-create-update-khvj9" event={"ID":"457ab17e-eca3-40eb-9116-fb82cbbcc65f","Type":"ContainerDied","Data":"0ac8d7dcda0f6c26811b48165e3e0ce82324aafe16f868ad396207a942d24ffe"} Feb 18 00:47:44 crc kubenswrapper[4847]: I0218 00:47:44.709517 4847 generic.go:334] "Generic (PLEG): container finished" podID="dec96268-406b-4a40-8825-9e3f0938d457" containerID="3a8f6bbbf7f5aad6401cf8f265f089e07966cc21bc68f74bf3ffadef365eb04f" exitCode=0 Feb 18 00:47:44 crc kubenswrapper[4847]: I0218 00:47:44.709590 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xg9hr" event={"ID":"dec96268-406b-4a40-8825-9e3f0938d457","Type":"ContainerDied","Data":"3a8f6bbbf7f5aad6401cf8f265f089e07966cc21bc68f74bf3ffadef365eb04f"} Feb 18 00:47:44 crc kubenswrapper[4847]: I0218 00:47:44.720616 4847 generic.go:334] "Generic (PLEG): container finished" podID="0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc" containerID="936574ae9481f5f0ae4568202bdfe99d73e60a7ad57f5e720681c4ef93b4d915" exitCode=0 Feb 18 00:47:44 crc kubenswrapper[4847]: I0218 00:47:44.720702 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-521e-account-create-update-crkzh" event={"ID":"0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc","Type":"ContainerDied","Data":"936574ae9481f5f0ae4568202bdfe99d73e60a7ad57f5e720681c4ef93b4d915"} Feb 18 00:47:44 crc kubenswrapper[4847]: I0218 
00:47:44.733960 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4238-account-create-update-jdjnk" event={"ID":"28ae3c38-4d5f-4601-a112-4b11ec4324b2","Type":"ContainerStarted","Data":"069d3c93d61239a393a6cecabdc1f727a015eb05fdd26a25d3a6a0d1df949e6a"} Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.395677 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5p2lc" Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.396288 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6fa8-account-create-update-8g6f8" Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.404267 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-hjjdx" Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.436099 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mjqg\" (UniqueName: \"kubernetes.io/projected/a42b82b8-55dd-48ab-86ea-0c50c940c8f8-kube-api-access-5mjqg\") pod \"a42b82b8-55dd-48ab-86ea-0c50c940c8f8\" (UID: \"a42b82b8-55dd-48ab-86ea-0c50c940c8f8\") " Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.436181 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3-operator-scripts\") pod \"0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3\" (UID: \"0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3\") " Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.436246 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a42b82b8-55dd-48ab-86ea-0c50c940c8f8-operator-scripts\") pod \"a42b82b8-55dd-48ab-86ea-0c50c940c8f8\" (UID: \"a42b82b8-55dd-48ab-86ea-0c50c940c8f8\") " Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 
00:47:45.436302 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6752\" (UniqueName: \"kubernetes.io/projected/0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3-kube-api-access-s6752\") pod \"0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3\" (UID: \"0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3\") " Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.438301 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3" (UID: "0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.438338 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a42b82b8-55dd-48ab-86ea-0c50c940c8f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a42b82b8-55dd-48ab-86ea-0c50c940c8f8" (UID: "a42b82b8-55dd-48ab-86ea-0c50c940c8f8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.455910 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a42b82b8-55dd-48ab-86ea-0c50c940c8f8-kube-api-access-5mjqg" (OuterVolumeSpecName: "kube-api-access-5mjqg") pod "a42b82b8-55dd-48ab-86ea-0c50c940c8f8" (UID: "a42b82b8-55dd-48ab-86ea-0c50c940c8f8"). InnerVolumeSpecName "kube-api-access-5mjqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.456089 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3-kube-api-access-s6752" (OuterVolumeSpecName: "kube-api-access-s6752") pod "0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3" (UID: "0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3"). InnerVolumeSpecName "kube-api-access-s6752". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.537731 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66b4e4a8-e7ee-4541-92de-2a0fe41f879b-operator-scripts\") pod \"66b4e4a8-e7ee-4541-92de-2a0fe41f879b\" (UID: \"66b4e4a8-e7ee-4541-92de-2a0fe41f879b\") " Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.538084 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dtfp\" (UniqueName: \"kubernetes.io/projected/66b4e4a8-e7ee-4541-92de-2a0fe41f879b-kube-api-access-5dtfp\") pod \"66b4e4a8-e7ee-4541-92de-2a0fe41f879b\" (UID: \"66b4e4a8-e7ee-4541-92de-2a0fe41f879b\") " Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.538743 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6752\" (UniqueName: \"kubernetes.io/projected/0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3-kube-api-access-s6752\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.538755 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mjqg\" (UniqueName: \"kubernetes.io/projected/a42b82b8-55dd-48ab-86ea-0c50c940c8f8-kube-api-access-5mjqg\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.538765 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.538774 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a42b82b8-55dd-48ab-86ea-0c50c940c8f8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.540040 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66b4e4a8-e7ee-4541-92de-2a0fe41f879b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66b4e4a8-e7ee-4541-92de-2a0fe41f879b" (UID: "66b4e4a8-e7ee-4541-92de-2a0fe41f879b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.542300 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66b4e4a8-e7ee-4541-92de-2a0fe41f879b-kube-api-access-5dtfp" (OuterVolumeSpecName: "kube-api-access-5dtfp") pod "66b4e4a8-e7ee-4541-92de-2a0fe41f879b" (UID: "66b4e4a8-e7ee-4541-92de-2a0fe41f879b"). InnerVolumeSpecName "kube-api-access-5dtfp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.640870 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66b4e4a8-e7ee-4541-92de-2a0fe41f879b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.640906 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dtfp\" (UniqueName: \"kubernetes.io/projected/66b4e4a8-e7ee-4541-92de-2a0fe41f879b-kube-api-access-5dtfp\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.752194 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-hjjdx" event={"ID":"66b4e4a8-e7ee-4541-92de-2a0fe41f879b","Type":"ContainerDied","Data":"29b9f06192ff469c2f3d616d143bdd860e870f9a992f0da91b11b812f69ee4c4"} Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.752249 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29b9f06192ff469c2f3d616d143bdd860e870f9a992f0da91b11b812f69ee4c4" Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.752220 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-hjjdx" Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.764520 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae","Type":"ContainerStarted","Data":"43bc12d10e182cbc3e8ee19bb3795332acd70ead14f42aa322c7a6e6220e5d88"} Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.767915 4847 generic.go:334] "Generic (PLEG): container finished" podID="b181ca27-a468-4527-a748-5cf4ac36fdb6" containerID="0650f42516b95dea8ce9c207fbb0b7d69ecfc556bf7d17df21d835faa5393835" exitCode=0 Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.767992 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-d6zdc" event={"ID":"b181ca27-a468-4527-a748-5cf4ac36fdb6","Type":"ContainerDied","Data":"0650f42516b95dea8ce9c207fbb0b7d69ecfc556bf7d17df21d835faa5393835"} Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.770336 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6fa8-account-create-update-8g6f8" Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.770335 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6fa8-account-create-update-8g6f8" event={"ID":"0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3","Type":"ContainerDied","Data":"c276beadf4a602173eef27d6d34f792ff9322d02fd568b927e037c32e177e777"} Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.770371 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c276beadf4a602173eef27d6d34f792ff9322d02fd568b927e037c32e177e777" Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.773130 4847 generic.go:334] "Generic (PLEG): container finished" podID="28ae3c38-4d5f-4601-a112-4b11ec4324b2" containerID="ac111aa6c0138e32c622c988694353ec1f13a86e2550d627ceaebb4c4ddfad61" exitCode=0 Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.773181 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4238-account-create-update-jdjnk" event={"ID":"28ae3c38-4d5f-4601-a112-4b11ec4324b2","Type":"ContainerDied","Data":"ac111aa6c0138e32c622c988694353ec1f13a86e2550d627ceaebb4c4ddfad61"} Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.774579 4847 generic.go:334] "Generic (PLEG): container finished" podID="b17f9edc-950a-4930-a9e8-cb5accfebfd0" containerID="c20fb1f015a90907f1e0323b53be169d1863d01e313db1e48e5cdeb650cb04fb" exitCode=0 Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.774631 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rfvrp" event={"ID":"b17f9edc-950a-4930-a9e8-cb5accfebfd0","Type":"ContainerDied","Data":"c20fb1f015a90907f1e0323b53be169d1863d01e313db1e48e5cdeb650cb04fb"} Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.775671 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5p2lc" 
event={"ID":"a42b82b8-55dd-48ab-86ea-0c50c940c8f8","Type":"ContainerDied","Data":"e52e40ec33d19cea2bc1eace1e9a7864f8c15a17b27d2f9490037e5284b53c90"} Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.775705 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e52e40ec33d19cea2bc1eace1e9a7864f8c15a17b27d2f9490037e5284b53c90" Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.775755 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5p2lc" Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.777054 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"623045fa-a3f1-4ad5-a5f7-361f31303bfb","Type":"ContainerStarted","Data":"000cdd0d37d045f29f7f12856816e32d6480a5cd11a1abcbe4a2eb734b855c72"} Feb 18 00:47:45 crc kubenswrapper[4847]: I0218 00:47:45.777130 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"623045fa-a3f1-4ad5-a5f7-361f31303bfb","Type":"ContainerStarted","Data":"c7af22756208bc599285041c40d182748d98380d979a1be6b51e30f44e28a123"} Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.259866 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-061e-account-create-update-khvj9" Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.362852 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-xg9hr" Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.364553 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/457ab17e-eca3-40eb-9116-fb82cbbcc65f-operator-scripts\") pod \"457ab17e-eca3-40eb-9116-fb82cbbcc65f\" (UID: \"457ab17e-eca3-40eb-9116-fb82cbbcc65f\") " Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.364594 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmhp6\" (UniqueName: \"kubernetes.io/projected/457ab17e-eca3-40eb-9116-fb82cbbcc65f-kube-api-access-gmhp6\") pod \"457ab17e-eca3-40eb-9116-fb82cbbcc65f\" (UID: \"457ab17e-eca3-40eb-9116-fb82cbbcc65f\") " Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.365941 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/457ab17e-eca3-40eb-9116-fb82cbbcc65f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "457ab17e-eca3-40eb-9116-fb82cbbcc65f" (UID: "457ab17e-eca3-40eb-9116-fb82cbbcc65f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.369041 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-521e-account-create-update-crkzh" Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.370933 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/457ab17e-eca3-40eb-9116-fb82cbbcc65f-kube-api-access-gmhp6" (OuterVolumeSpecName: "kube-api-access-gmhp6") pod "457ab17e-eca3-40eb-9116-fb82cbbcc65f" (UID: "457ab17e-eca3-40eb-9116-fb82cbbcc65f"). InnerVolumeSpecName "kube-api-access-gmhp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.465982 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dec96268-406b-4a40-8825-9e3f0938d457-operator-scripts\") pod \"dec96268-406b-4a40-8825-9e3f0938d457\" (UID: \"dec96268-406b-4a40-8825-9e3f0938d457\") " Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.466083 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5slm\" (UniqueName: \"kubernetes.io/projected/dec96268-406b-4a40-8825-9e3f0938d457-kube-api-access-f5slm\") pod \"dec96268-406b-4a40-8825-9e3f0938d457\" (UID: \"dec96268-406b-4a40-8825-9e3f0938d457\") " Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.466130 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45mlh\" (UniqueName: \"kubernetes.io/projected/0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc-kube-api-access-45mlh\") pod \"0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc\" (UID: \"0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc\") " Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.466225 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc-operator-scripts\") pod \"0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc\" (UID: \"0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc\") " Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.466642 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmhp6\" (UniqueName: \"kubernetes.io/projected/457ab17e-eca3-40eb-9116-fb82cbbcc65f-kube-api-access-gmhp6\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.466663 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/457ab17e-eca3-40eb-9116-fb82cbbcc65f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.467323 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dec96268-406b-4a40-8825-9e3f0938d457-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dec96268-406b-4a40-8825-9e3f0938d457" (UID: "dec96268-406b-4a40-8825-9e3f0938d457"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.467827 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc" (UID: "0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.475900 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dec96268-406b-4a40-8825-9e3f0938d457-kube-api-access-f5slm" (OuterVolumeSpecName: "kube-api-access-f5slm") pod "dec96268-406b-4a40-8825-9e3f0938d457" (UID: "dec96268-406b-4a40-8825-9e3f0938d457"). InnerVolumeSpecName "kube-api-access-f5slm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.476202 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc-kube-api-access-45mlh" (OuterVolumeSpecName: "kube-api-access-45mlh") pod "0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc" (UID: "0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc"). InnerVolumeSpecName "kube-api-access-45mlh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.568670 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dec96268-406b-4a40-8825-9e3f0938d457-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.568714 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5slm\" (UniqueName: \"kubernetes.io/projected/dec96268-406b-4a40-8825-9e3f0938d457-kube-api-access-f5slm\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.568725 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45mlh\" (UniqueName: \"kubernetes.io/projected/0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc-kube-api-access-45mlh\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.568737 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.794553 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"623045fa-a3f1-4ad5-a5f7-361f31303bfb","Type":"ContainerStarted","Data":"d84be5c318a942754268d08d3e4708650dbd97317a08be3771c84d844291f1ae"} Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.795281 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"623045fa-a3f1-4ad5-a5f7-361f31303bfb","Type":"ContainerStarted","Data":"2b8f873c07fae859911d13123e9eec11d71c7dac756612f8f2a3e46aaab42d93"} Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.797233 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-061e-account-create-update-khvj9" 
event={"ID":"457ab17e-eca3-40eb-9116-fb82cbbcc65f","Type":"ContainerDied","Data":"7b40d03d51b83a50471a0200ac498a4ab90ae61d401dc09e25184713d8f2653b"} Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.797280 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b40d03d51b83a50471a0200ac498a4ab90ae61d401dc09e25184713d8f2653b" Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.797369 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-061e-account-create-update-khvj9" Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.802543 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xg9hr" event={"ID":"dec96268-406b-4a40-8825-9e3f0938d457","Type":"ContainerDied","Data":"f86257c291a5ca285fa8c0ba6e461514e42965bab4274791335685db8cd7a44e"} Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.802639 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f86257c291a5ca285fa8c0ba6e461514e42965bab4274791335685db8cd7a44e" Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.802743 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xg9hr" Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.811791 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-521e-account-create-update-crkzh" event={"ID":"0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc","Type":"ContainerDied","Data":"c442670d2278a329db7981ac0c8f5175b6dc3f1632d64c0d4b3abbb2c415fa2a"} Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.811868 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c442670d2278a329db7981ac0c8f5175b6dc3f1632d64c0d4b3abbb2c415fa2a" Feb 18 00:47:46 crc kubenswrapper[4847]: I0218 00:47:46.811969 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-521e-account-create-update-crkzh" Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.773652 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rfvrp" Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.780162 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4238-account-create-update-jdjnk" Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.785220 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-d6zdc" Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.866386 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b17f9edc-950a-4930-a9e8-cb5accfebfd0-operator-scripts\") pod \"b17f9edc-950a-4930-a9e8-cb5accfebfd0\" (UID: \"b17f9edc-950a-4930-a9e8-cb5accfebfd0\") " Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.866451 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b181ca27-a468-4527-a748-5cf4ac36fdb6-operator-scripts\") pod \"b181ca27-a468-4527-a748-5cf4ac36fdb6\" (UID: \"b181ca27-a468-4527-a748-5cf4ac36fdb6\") " Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.866560 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnztf\" (UniqueName: \"kubernetes.io/projected/28ae3c38-4d5f-4601-a112-4b11ec4324b2-kube-api-access-wnztf\") pod \"28ae3c38-4d5f-4601-a112-4b11ec4324b2\" (UID: \"28ae3c38-4d5f-4601-a112-4b11ec4324b2\") " Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.866758 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28ae3c38-4d5f-4601-a112-4b11ec4324b2-operator-scripts\") 
pod \"28ae3c38-4d5f-4601-a112-4b11ec4324b2\" (UID: \"28ae3c38-4d5f-4601-a112-4b11ec4324b2\") " Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.866823 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmj49\" (UniqueName: \"kubernetes.io/projected/b181ca27-a468-4527-a748-5cf4ac36fdb6-kube-api-access-mmj49\") pod \"b181ca27-a468-4527-a748-5cf4ac36fdb6\" (UID: \"b181ca27-a468-4527-a748-5cf4ac36fdb6\") " Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.866849 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvkf6\" (UniqueName: \"kubernetes.io/projected/b17f9edc-950a-4930-a9e8-cb5accfebfd0-kube-api-access-vvkf6\") pod \"b17f9edc-950a-4930-a9e8-cb5accfebfd0\" (UID: \"b17f9edc-950a-4930-a9e8-cb5accfebfd0\") " Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.867452 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b181ca27-a468-4527-a748-5cf4ac36fdb6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b181ca27-a468-4527-a748-5cf4ac36fdb6" (UID: "b181ca27-a468-4527-a748-5cf4ac36fdb6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.867755 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b181ca27-a468-4527-a748-5cf4ac36fdb6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.872409 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b17f9edc-950a-4930-a9e8-cb5accfebfd0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b17f9edc-950a-4930-a9e8-cb5accfebfd0" (UID: "b17f9edc-950a-4930-a9e8-cb5accfebfd0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.872549 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ae3c38-4d5f-4601-a112-4b11ec4324b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "28ae3c38-4d5f-4601-a112-4b11ec4324b2" (UID: "28ae3c38-4d5f-4601-a112-4b11ec4324b2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.873737 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ae3c38-4d5f-4601-a112-4b11ec4324b2-kube-api-access-wnztf" (OuterVolumeSpecName: "kube-api-access-wnztf") pod "28ae3c38-4d5f-4601-a112-4b11ec4324b2" (UID: "28ae3c38-4d5f-4601-a112-4b11ec4324b2"). InnerVolumeSpecName "kube-api-access-wnztf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.874527 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b17f9edc-950a-4930-a9e8-cb5accfebfd0-kube-api-access-vvkf6" (OuterVolumeSpecName: "kube-api-access-vvkf6") pod "b17f9edc-950a-4930-a9e8-cb5accfebfd0" (UID: "b17f9edc-950a-4930-a9e8-cb5accfebfd0"). InnerVolumeSpecName "kube-api-access-vvkf6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.875352 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4238-account-create-update-jdjnk" event={"ID":"28ae3c38-4d5f-4601-a112-4b11ec4324b2","Type":"ContainerDied","Data":"069d3c93d61239a393a6cecabdc1f727a015eb05fdd26a25d3a6a0d1df949e6a"} Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.875472 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="069d3c93d61239a393a6cecabdc1f727a015eb05fdd26a25d3a6a0d1df949e6a" Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.875635 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4238-account-create-update-jdjnk" Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.877045 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b181ca27-a468-4527-a748-5cf4ac36fdb6-kube-api-access-mmj49" (OuterVolumeSpecName: "kube-api-access-mmj49") pod "b181ca27-a468-4527-a748-5cf4ac36fdb6" (UID: "b181ca27-a468-4527-a748-5cf4ac36fdb6"). InnerVolumeSpecName "kube-api-access-mmj49". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.877984 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-d6zdc" event={"ID":"b181ca27-a468-4527-a748-5cf4ac36fdb6","Type":"ContainerDied","Data":"b5b8b7601e7f0d0c70c2827fe8648b16634dac9dfb4254f5696e9ec348c053db"} Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.878046 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5b8b7601e7f0d0c70c2827fe8648b16634dac9dfb4254f5696e9ec348c053db" Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.878013 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-d6zdc" Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.881773 4847 generic.go:334] "Generic (PLEG): container finished" podID="60c4f757-8241-4268-92af-da05a6e0217e" containerID="a25fd2e270eee2583d85f4e223f799766a2f4cf6ee32004726bba00921f310d2" exitCode=0 Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.881884 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qldtf" event={"ID":"60c4f757-8241-4268-92af-da05a6e0217e","Type":"ContainerDied","Data":"a25fd2e270eee2583d85f4e223f799766a2f4cf6ee32004726bba00921f310d2"} Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.889256 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rfvrp" event={"ID":"b17f9edc-950a-4930-a9e8-cb5accfebfd0","Type":"ContainerDied","Data":"ed08bb9066733ff2cd93088eb1ff30ac6be63590144aaa98193aa2f7bb00ff70"} Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.889285 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed08bb9066733ff2cd93088eb1ff30ac6be63590144aaa98193aa2f7bb00ff70" Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.889349 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-rfvrp" Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.970014 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnztf\" (UniqueName: \"kubernetes.io/projected/28ae3c38-4d5f-4601-a112-4b11ec4324b2-kube-api-access-wnztf\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.970061 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28ae3c38-4d5f-4601-a112-4b11ec4324b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.970085 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmj49\" (UniqueName: \"kubernetes.io/projected/b181ca27-a468-4527-a748-5cf4ac36fdb6-kube-api-access-mmj49\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.970098 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvkf6\" (UniqueName: \"kubernetes.io/projected/b17f9edc-950a-4930-a9e8-cb5accfebfd0-kube-api-access-vvkf6\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:49 crc kubenswrapper[4847]: I0218 00:47:49.970111 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b17f9edc-950a-4930-a9e8-cb5accfebfd0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:50 crc kubenswrapper[4847]: I0218 00:47:50.908228 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-j4gvn" event={"ID":"191319b2-ff52-494a-8ba9-a7402cc0dda7","Type":"ContainerStarted","Data":"1cdf146627b3a94206f70eefd5763c81eeb7990652bf09baea080a9fca8bbfc8"} Feb 18 00:47:50 crc kubenswrapper[4847]: I0218 00:47:50.929434 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"623045fa-a3f1-4ad5-a5f7-361f31303bfb","Type":"ContainerStarted","Data":"5d63a4a5ea1426388167d3e2702deb2402a78d25af5c963d1c8de44952cec80e"} Feb 18 00:47:50 crc kubenswrapper[4847]: I0218 00:47:50.929732 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"623045fa-a3f1-4ad5-a5f7-361f31303bfb","Type":"ContainerStarted","Data":"9e48eff7420c105bc833439d3367e1f07c309f55f891b7c97e79bc67d38971e1"} Feb 18 00:47:50 crc kubenswrapper[4847]: I0218 00:47:50.932364 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-j4gvn" podStartSLOduration=3.313577735 podStartE2EDuration="9.932342242s" podCreationTimestamp="2026-02-18 00:47:41 +0000 UTC" firstStartedPulling="2026-02-18 00:47:43.013534812 +0000 UTC m=+1336.390885754" lastFinishedPulling="2026-02-18 00:47:49.632299319 +0000 UTC m=+1343.009650261" observedRunningTime="2026-02-18 00:47:50.929934245 +0000 UTC m=+1344.307285187" watchObservedRunningTime="2026-02-18 00:47:50.932342242 +0000 UTC m=+1344.309693174" Feb 18 00:47:51 crc kubenswrapper[4847]: I0218 00:47:51.939164 4847 generic.go:334] "Generic (PLEG): container finished" podID="f622e85f-b79e-4abb-aa5d-bb51ca59d1ae" containerID="43bc12d10e182cbc3e8ee19bb3795332acd70ead14f42aa322c7a6e6220e5d88" exitCode=0 Feb 18 00:47:51 crc kubenswrapper[4847]: I0218 00:47:51.939291 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae","Type":"ContainerDied","Data":"43bc12d10e182cbc3e8ee19bb3795332acd70ead14f42aa322c7a6e6220e5d88"} Feb 18 00:47:51 crc kubenswrapper[4847]: I0218 00:47:51.942904 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qldtf" event={"ID":"60c4f757-8241-4268-92af-da05a6e0217e","Type":"ContainerDied","Data":"fdd07e0990e8c8ce497bff9821e280e289303b1258f57a9a9d755749956aff23"} Feb 18 00:47:51 crc kubenswrapper[4847]: I0218 
00:47:51.942949 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdd07e0990e8c8ce497bff9821e280e289303b1258f57a9a9d755749956aff23" Feb 18 00:47:51 crc kubenswrapper[4847]: I0218 00:47:51.962088 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"623045fa-a3f1-4ad5-a5f7-361f31303bfb","Type":"ContainerStarted","Data":"12a3e37ccd03c1b41e0e6ad7bdb831cc2fec0eb38a400fc4b14797a375bab370"} Feb 18 00:47:51 crc kubenswrapper[4847]: I0218 00:47:51.962127 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"623045fa-a3f1-4ad5-a5f7-361f31303bfb","Type":"ContainerStarted","Data":"7cd7156f205266df51d9ed0c890271146ba7505d8db7aa27cb651f6178b864a2"} Feb 18 00:47:52 crc kubenswrapper[4847]: I0218 00:47:52.046732 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-qldtf" Feb 18 00:47:52 crc kubenswrapper[4847]: I0218 00:47:52.152199 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld52g\" (UniqueName: \"kubernetes.io/projected/60c4f757-8241-4268-92af-da05a6e0217e-kube-api-access-ld52g\") pod \"60c4f757-8241-4268-92af-da05a6e0217e\" (UID: \"60c4f757-8241-4268-92af-da05a6e0217e\") " Feb 18 00:47:52 crc kubenswrapper[4847]: I0218 00:47:52.152580 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60c4f757-8241-4268-92af-da05a6e0217e-config-data\") pod \"60c4f757-8241-4268-92af-da05a6e0217e\" (UID: \"60c4f757-8241-4268-92af-da05a6e0217e\") " Feb 18 00:47:52 crc kubenswrapper[4847]: I0218 00:47:52.152763 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/60c4f757-8241-4268-92af-da05a6e0217e-db-sync-config-data\") pod \"60c4f757-8241-4268-92af-da05a6e0217e\" (UID: 
\"60c4f757-8241-4268-92af-da05a6e0217e\") " Feb 18 00:47:52 crc kubenswrapper[4847]: I0218 00:47:52.152827 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60c4f757-8241-4268-92af-da05a6e0217e-combined-ca-bundle\") pod \"60c4f757-8241-4268-92af-da05a6e0217e\" (UID: \"60c4f757-8241-4268-92af-da05a6e0217e\") " Feb 18 00:47:52 crc kubenswrapper[4847]: I0218 00:47:52.157320 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60c4f757-8241-4268-92af-da05a6e0217e-kube-api-access-ld52g" (OuterVolumeSpecName: "kube-api-access-ld52g") pod "60c4f757-8241-4268-92af-da05a6e0217e" (UID: "60c4f757-8241-4268-92af-da05a6e0217e"). InnerVolumeSpecName "kube-api-access-ld52g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:52 crc kubenswrapper[4847]: I0218 00:47:52.159384 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60c4f757-8241-4268-92af-da05a6e0217e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "60c4f757-8241-4268-92af-da05a6e0217e" (UID: "60c4f757-8241-4268-92af-da05a6e0217e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:47:52 crc kubenswrapper[4847]: I0218 00:47:52.185861 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60c4f757-8241-4268-92af-da05a6e0217e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60c4f757-8241-4268-92af-da05a6e0217e" (UID: "60c4f757-8241-4268-92af-da05a6e0217e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:47:52 crc kubenswrapper[4847]: I0218 00:47:52.205566 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60c4f757-8241-4268-92af-da05a6e0217e-config-data" (OuterVolumeSpecName: "config-data") pod "60c4f757-8241-4268-92af-da05a6e0217e" (UID: "60c4f757-8241-4268-92af-da05a6e0217e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:47:52 crc kubenswrapper[4847]: I0218 00:47:52.256087 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld52g\" (UniqueName: \"kubernetes.io/projected/60c4f757-8241-4268-92af-da05a6e0217e-kube-api-access-ld52g\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:52 crc kubenswrapper[4847]: I0218 00:47:52.256133 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60c4f757-8241-4268-92af-da05a6e0217e-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:52 crc kubenswrapper[4847]: I0218 00:47:52.256148 4847 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/60c4f757-8241-4268-92af-da05a6e0217e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:52 crc kubenswrapper[4847]: I0218 00:47:52.256161 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60c4f757-8241-4268-92af-da05a6e0217e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:52 crc kubenswrapper[4847]: I0218 00:47:52.974920 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae","Type":"ContainerStarted","Data":"32fc49d138e715a91137495d68a10f9d09438390b0b4aaf5ed9208e025a4b3f7"} Feb 18 00:47:52 crc kubenswrapper[4847]: I0218 00:47:52.974961 4847 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/glance-db-sync-qldtf" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.514075 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-brfpx"] Feb 18 00:47:53 crc kubenswrapper[4847]: E0218 00:47:53.514912 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a42b82b8-55dd-48ab-86ea-0c50c940c8f8" containerName="mariadb-account-create-update" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.514940 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="a42b82b8-55dd-48ab-86ea-0c50c940c8f8" containerName="mariadb-account-create-update" Feb 18 00:47:53 crc kubenswrapper[4847]: E0218 00:47:53.514964 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dec96268-406b-4a40-8825-9e3f0938d457" containerName="mariadb-database-create" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.514973 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="dec96268-406b-4a40-8825-9e3f0938d457" containerName="mariadb-database-create" Feb 18 00:47:53 crc kubenswrapper[4847]: E0218 00:47:53.514986 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b4e4a8-e7ee-4541-92de-2a0fe41f879b" containerName="mariadb-database-create" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.514994 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b4e4a8-e7ee-4541-92de-2a0fe41f879b" containerName="mariadb-database-create" Feb 18 00:47:53 crc kubenswrapper[4847]: E0218 00:47:53.515007 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b181ca27-a468-4527-a748-5cf4ac36fdb6" containerName="mariadb-database-create" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.515018 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="b181ca27-a468-4527-a748-5cf4ac36fdb6" containerName="mariadb-database-create" Feb 18 00:47:53 crc kubenswrapper[4847]: E0218 00:47:53.515028 4847 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="60c4f757-8241-4268-92af-da05a6e0217e" containerName="glance-db-sync" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.515037 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c4f757-8241-4268-92af-da05a6e0217e" containerName="glance-db-sync" Feb 18 00:47:53 crc kubenswrapper[4847]: E0218 00:47:53.515048 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="457ab17e-eca3-40eb-9116-fb82cbbcc65f" containerName="mariadb-account-create-update" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.515055 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="457ab17e-eca3-40eb-9116-fb82cbbcc65f" containerName="mariadb-account-create-update" Feb 18 00:47:53 crc kubenswrapper[4847]: E0218 00:47:53.515064 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ae3c38-4d5f-4601-a112-4b11ec4324b2" containerName="mariadb-account-create-update" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.515085 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ae3c38-4d5f-4601-a112-4b11ec4324b2" containerName="mariadb-account-create-update" Feb 18 00:47:53 crc kubenswrapper[4847]: E0218 00:47:53.515116 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3" containerName="mariadb-account-create-update" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.515126 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3" containerName="mariadb-account-create-update" Feb 18 00:47:53 crc kubenswrapper[4847]: E0218 00:47:53.515141 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc" containerName="mariadb-account-create-update" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.515148 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc" containerName="mariadb-account-create-update" Feb 18 
00:47:53 crc kubenswrapper[4847]: E0218 00:47:53.515157 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b17f9edc-950a-4930-a9e8-cb5accfebfd0" containerName="mariadb-database-create" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.515164 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="b17f9edc-950a-4930-a9e8-cb5accfebfd0" containerName="mariadb-database-create" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.515416 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="60c4f757-8241-4268-92af-da05a6e0217e" containerName="glance-db-sync" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.515437 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3" containerName="mariadb-account-create-update" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.515445 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="dec96268-406b-4a40-8825-9e3f0938d457" containerName="mariadb-database-create" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.515462 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="b17f9edc-950a-4930-a9e8-cb5accfebfd0" containerName="mariadb-database-create" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.515475 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ae3c38-4d5f-4601-a112-4b11ec4324b2" containerName="mariadb-account-create-update" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.515491 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="457ab17e-eca3-40eb-9116-fb82cbbcc65f" containerName="mariadb-account-create-update" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.515501 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="b181ca27-a468-4527-a748-5cf4ac36fdb6" containerName="mariadb-database-create" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.515512 4847 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc" containerName="mariadb-account-create-update" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.515525 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="66b4e4a8-e7ee-4541-92de-2a0fe41f879b" containerName="mariadb-database-create" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.515542 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="a42b82b8-55dd-48ab-86ea-0c50c940c8f8" containerName="mariadb-account-create-update" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.517891 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-brfpx" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.545312 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-brfpx"] Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.680666 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-brfpx\" (UID: \"ac9a8321-4947-4121-b648-a6656fc592f4\") " pod="openstack/dnsmasq-dns-74dc88fc-brfpx" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.680732 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-config\") pod \"dnsmasq-dns-74dc88fc-brfpx\" (UID: \"ac9a8321-4947-4121-b648-a6656fc592f4\") " pod="openstack/dnsmasq-dns-74dc88fc-brfpx" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.680763 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-ovsdbserver-nb\") pod 
\"dnsmasq-dns-74dc88fc-brfpx\" (UID: \"ac9a8321-4947-4121-b648-a6656fc592f4\") " pod="openstack/dnsmasq-dns-74dc88fc-brfpx" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.680923 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-dns-svc\") pod \"dnsmasq-dns-74dc88fc-brfpx\" (UID: \"ac9a8321-4947-4121-b648-a6656fc592f4\") " pod="openstack/dnsmasq-dns-74dc88fc-brfpx" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.681047 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxhrd\" (UniqueName: \"kubernetes.io/projected/ac9a8321-4947-4121-b648-a6656fc592f4-kube-api-access-mxhrd\") pod \"dnsmasq-dns-74dc88fc-brfpx\" (UID: \"ac9a8321-4947-4121-b648-a6656fc592f4\") " pod="openstack/dnsmasq-dns-74dc88fc-brfpx" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.782874 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-brfpx\" (UID: \"ac9a8321-4947-4121-b648-a6656fc592f4\") " pod="openstack/dnsmasq-dns-74dc88fc-brfpx" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.782946 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-config\") pod \"dnsmasq-dns-74dc88fc-brfpx\" (UID: \"ac9a8321-4947-4121-b648-a6656fc592f4\") " pod="openstack/dnsmasq-dns-74dc88fc-brfpx" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.782974 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-brfpx\" (UID: 
\"ac9a8321-4947-4121-b648-a6656fc592f4\") " pod="openstack/dnsmasq-dns-74dc88fc-brfpx" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.783015 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-dns-svc\") pod \"dnsmasq-dns-74dc88fc-brfpx\" (UID: \"ac9a8321-4947-4121-b648-a6656fc592f4\") " pod="openstack/dnsmasq-dns-74dc88fc-brfpx" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.783045 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxhrd\" (UniqueName: \"kubernetes.io/projected/ac9a8321-4947-4121-b648-a6656fc592f4-kube-api-access-mxhrd\") pod \"dnsmasq-dns-74dc88fc-brfpx\" (UID: \"ac9a8321-4947-4121-b648-a6656fc592f4\") " pod="openstack/dnsmasq-dns-74dc88fc-brfpx" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.784729 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-brfpx\" (UID: \"ac9a8321-4947-4121-b648-a6656fc592f4\") " pod="openstack/dnsmasq-dns-74dc88fc-brfpx" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.784745 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-dns-svc\") pod \"dnsmasq-dns-74dc88fc-brfpx\" (UID: \"ac9a8321-4947-4121-b648-a6656fc592f4\") " pod="openstack/dnsmasq-dns-74dc88fc-brfpx" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.784742 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-brfpx\" (UID: \"ac9a8321-4947-4121-b648-a6656fc592f4\") " pod="openstack/dnsmasq-dns-74dc88fc-brfpx" Feb 18 00:47:53 crc 
kubenswrapper[4847]: I0218 00:47:53.787547 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-config\") pod \"dnsmasq-dns-74dc88fc-brfpx\" (UID: \"ac9a8321-4947-4121-b648-a6656fc592f4\") " pod="openstack/dnsmasq-dns-74dc88fc-brfpx" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.800756 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxhrd\" (UniqueName: \"kubernetes.io/projected/ac9a8321-4947-4121-b648-a6656fc592f4-kube-api-access-mxhrd\") pod \"dnsmasq-dns-74dc88fc-brfpx\" (UID: \"ac9a8321-4947-4121-b648-a6656fc592f4\") " pod="openstack/dnsmasq-dns-74dc88fc-brfpx" Feb 18 00:47:53 crc kubenswrapper[4847]: I0218 00:47:53.851184 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-brfpx" Feb 18 00:47:54 crc kubenswrapper[4847]: I0218 00:47:54.307878 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-brfpx"] Feb 18 00:47:55 crc kubenswrapper[4847]: I0218 00:47:55.000861 4847 generic.go:334] "Generic (PLEG): container finished" podID="ac9a8321-4947-4121-b648-a6656fc592f4" containerID="bdb3ed82b0cbb554d4ea6ce921bddd3e9b81e96a004d89c4b2991ea81abc3fa4" exitCode=0 Feb 18 00:47:55 crc kubenswrapper[4847]: I0218 00:47:55.000954 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-brfpx" event={"ID":"ac9a8321-4947-4121-b648-a6656fc592f4","Type":"ContainerDied","Data":"bdb3ed82b0cbb554d4ea6ce921bddd3e9b81e96a004d89c4b2991ea81abc3fa4"} Feb 18 00:47:55 crc kubenswrapper[4847]: I0218 00:47:55.001387 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-brfpx" event={"ID":"ac9a8321-4947-4121-b648-a6656fc592f4","Type":"ContainerStarted","Data":"96c6d7671cb3ec37af550954e2b3731595cb3ea11f6df558b48c67ee848f3445"} Feb 18 00:47:56 crc kubenswrapper[4847]: 
I0218 00:47:56.015644 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"623045fa-a3f1-4ad5-a5f7-361f31303bfb","Type":"ContainerStarted","Data":"d4ce7cbc0799f135c46fe5fa2b7e774a4cc65d223c8f775e08220bae8b4f176e"} Feb 18 00:47:56 crc kubenswrapper[4847]: I0218 00:47:56.016242 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"623045fa-a3f1-4ad5-a5f7-361f31303bfb","Type":"ContainerStarted","Data":"279a0a56a33ebd2c3ffd064600f263e877a5d6926fe08391f353d1c296d0cdf5"} Feb 18 00:47:56 crc kubenswrapper[4847]: I0218 00:47:56.016255 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"623045fa-a3f1-4ad5-a5f7-361f31303bfb","Type":"ContainerStarted","Data":"70f26fe36c10a7aad74de065dc74976539ccb4df556d0f6cefdcada5411ac6f8"} Feb 18 00:47:56 crc kubenswrapper[4847]: I0218 00:47:56.016265 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"623045fa-a3f1-4ad5-a5f7-361f31303bfb","Type":"ContainerStarted","Data":"19c88c9b53bd2af514394dd3e5d794c4373ed8992eefc6a171e1c038fe7f7ccd"} Feb 18 00:47:56 crc kubenswrapper[4847]: I0218 00:47:56.017864 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-brfpx" event={"ID":"ac9a8321-4947-4121-b648-a6656fc592f4","Type":"ContainerStarted","Data":"79b8d197f1ebac386fa525d4fc0fe25feb3b7042552c5c1ce55db8d309be047e"} Feb 18 00:47:56 crc kubenswrapper[4847]: I0218 00:47:56.018096 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dc88fc-brfpx" Feb 18 00:47:56 crc kubenswrapper[4847]: I0218 00:47:56.034855 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae","Type":"ContainerStarted","Data":"49b7fec5d71977ef9cfbccb0f95f99375da3e340c27edafe8c0f2d2d6f0a13fc"} Feb 18 00:47:56 crc 
kubenswrapper[4847]: I0218 00:47:56.034904 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"f622e85f-b79e-4abb-aa5d-bb51ca59d1ae","Type":"ContainerStarted","Data":"ddb116fb0d87b252a511536d4a8979f57ab5f01f5cce3bf3d739f2942905e57a"} Feb 18 00:47:56 crc kubenswrapper[4847]: I0218 00:47:56.039065 4847 generic.go:334] "Generic (PLEG): container finished" podID="191319b2-ff52-494a-8ba9-a7402cc0dda7" containerID="1cdf146627b3a94206f70eefd5763c81eeb7990652bf09baea080a9fca8bbfc8" exitCode=0 Feb 18 00:47:56 crc kubenswrapper[4847]: I0218 00:47:56.039111 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-j4gvn" event={"ID":"191319b2-ff52-494a-8ba9-a7402cc0dda7","Type":"ContainerDied","Data":"1cdf146627b3a94206f70eefd5763c81eeb7990652bf09baea080a9fca8bbfc8"} Feb 18 00:47:56 crc kubenswrapper[4847]: I0218 00:47:56.046050 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74dc88fc-brfpx" podStartSLOduration=3.046033062 podStartE2EDuration="3.046033062s" podCreationTimestamp="2026-02-18 00:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:47:56.043068871 +0000 UTC m=+1349.420419813" watchObservedRunningTime="2026-02-18 00:47:56.046033062 +0000 UTC m=+1349.423384004" Feb 18 00:47:56 crc kubenswrapper[4847]: I0218 00:47:56.089797 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.089776997 podStartE2EDuration="16.089776997s" podCreationTimestamp="2026-02-18 00:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:47:56.084196005 +0000 UTC m=+1349.461546947" watchObservedRunningTime="2026-02-18 00:47:56.089776997 +0000 UTC m=+1349.467127949" Feb 
18 00:47:56 crc kubenswrapper[4847]: I0218 00:47:56.149405 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:56 crc kubenswrapper[4847]: I0218 00:47:56.149452 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:56 crc kubenswrapper[4847]: I0218 00:47:56.156882 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.062502 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"623045fa-a3f1-4ad5-a5f7-361f31303bfb","Type":"ContainerStarted","Data":"4e701f723b129e0a3b4fe9b339819bfe719b0c6f1c34ddeb1a6acc4858a593e2"} Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.063287 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"623045fa-a3f1-4ad5-a5f7-361f31303bfb","Type":"ContainerStarted","Data":"eed4a7121a0f4894230d6f056bf897e45d3e106cc24aedaec619bd4b76a07121"} Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.063301 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"623045fa-a3f1-4ad5-a5f7-361f31303bfb","Type":"ContainerStarted","Data":"c267d626c23ed84f919ad88997799a1a60dbe91ba3a286c668c2f04961625f5e"} Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.072081 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.124249 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.871273973 podStartE2EDuration="51.124228381s" podCreationTimestamp="2026-02-18 00:47:06 +0000 UTC" firstStartedPulling="2026-02-18 00:47:40.86673615 +0000 UTC m=+1334.244087092" 
lastFinishedPulling="2026-02-18 00:47:55.119690568 +0000 UTC m=+1348.497041500" observedRunningTime="2026-02-18 00:47:57.111367357 +0000 UTC m=+1350.488718329" watchObservedRunningTime="2026-02-18 00:47:57.124228381 +0000 UTC m=+1350.501579333" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.520003 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-brfpx"] Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.543675 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-75skr"] Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.545312 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-75skr" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.549255 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.581863 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-75skr"] Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.671447 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-75skr\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " pod="openstack/dnsmasq-dns-5f59b8f679-75skr" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.671653 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-75skr\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " pod="openstack/dnsmasq-dns-5f59b8f679-75skr" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.671699 4847 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-75skr\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " pod="openstack/dnsmasq-dns-5f59b8f679-75skr" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.671724 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-config\") pod \"dnsmasq-dns-5f59b8f679-75skr\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " pod="openstack/dnsmasq-dns-5f59b8f679-75skr" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.671776 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-75skr\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " pod="openstack/dnsmasq-dns-5f59b8f679-75skr" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.671833 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5q87\" (UniqueName: \"kubernetes.io/projected/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-kube-api-access-k5q87\") pod \"dnsmasq-dns-5f59b8f679-75skr\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " pod="openstack/dnsmasq-dns-5f59b8f679-75skr" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.675487 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-j4gvn" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.772697 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/191319b2-ff52-494a-8ba9-a7402cc0dda7-combined-ca-bundle\") pod \"191319b2-ff52-494a-8ba9-a7402cc0dda7\" (UID: \"191319b2-ff52-494a-8ba9-a7402cc0dda7\") " Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.772863 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8l2g\" (UniqueName: \"kubernetes.io/projected/191319b2-ff52-494a-8ba9-a7402cc0dda7-kube-api-access-p8l2g\") pod \"191319b2-ff52-494a-8ba9-a7402cc0dda7\" (UID: \"191319b2-ff52-494a-8ba9-a7402cc0dda7\") " Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.773003 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/191319b2-ff52-494a-8ba9-a7402cc0dda7-config-data\") pod \"191319b2-ff52-494a-8ba9-a7402cc0dda7\" (UID: \"191319b2-ff52-494a-8ba9-a7402cc0dda7\") " Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.773274 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-75skr\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " pod="openstack/dnsmasq-dns-5f59b8f679-75skr" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.773351 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-75skr\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " pod="openstack/dnsmasq-dns-5f59b8f679-75skr" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.773379 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-config\") pod \"dnsmasq-dns-5f59b8f679-75skr\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " pod="openstack/dnsmasq-dns-5f59b8f679-75skr" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.773441 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-75skr\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " pod="openstack/dnsmasq-dns-5f59b8f679-75skr" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.773508 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5q87\" (UniqueName: \"kubernetes.io/projected/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-kube-api-access-k5q87\") pod \"dnsmasq-dns-5f59b8f679-75skr\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " pod="openstack/dnsmasq-dns-5f59b8f679-75skr" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.773541 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-75skr\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " pod="openstack/dnsmasq-dns-5f59b8f679-75skr" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.774378 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-75skr\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " pod="openstack/dnsmasq-dns-5f59b8f679-75skr" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.774547 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-75skr\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " pod="openstack/dnsmasq-dns-5f59b8f679-75skr" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.775019 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-config\") pod \"dnsmasq-dns-5f59b8f679-75skr\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " pod="openstack/dnsmasq-dns-5f59b8f679-75skr" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.775198 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-75skr\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " pod="openstack/dnsmasq-dns-5f59b8f679-75skr" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.775544 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-75skr\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " pod="openstack/dnsmasq-dns-5f59b8f679-75skr" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.779172 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/191319b2-ff52-494a-8ba9-a7402cc0dda7-kube-api-access-p8l2g" (OuterVolumeSpecName: "kube-api-access-p8l2g") pod "191319b2-ff52-494a-8ba9-a7402cc0dda7" (UID: "191319b2-ff52-494a-8ba9-a7402cc0dda7"). InnerVolumeSpecName "kube-api-access-p8l2g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.793255 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5q87\" (UniqueName: \"kubernetes.io/projected/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-kube-api-access-k5q87\") pod \"dnsmasq-dns-5f59b8f679-75skr\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " pod="openstack/dnsmasq-dns-5f59b8f679-75skr" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.801417 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/191319b2-ff52-494a-8ba9-a7402cc0dda7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "191319b2-ff52-494a-8ba9-a7402cc0dda7" (UID: "191319b2-ff52-494a-8ba9-a7402cc0dda7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.821563 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/191319b2-ff52-494a-8ba9-a7402cc0dda7-config-data" (OuterVolumeSpecName: "config-data") pod "191319b2-ff52-494a-8ba9-a7402cc0dda7" (UID: "191319b2-ff52-494a-8ba9-a7402cc0dda7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.875255 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/191319b2-ff52-494a-8ba9-a7402cc0dda7-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.875286 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/191319b2-ff52-494a-8ba9-a7402cc0dda7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.875312 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8l2g\" (UniqueName: \"kubernetes.io/projected/191319b2-ff52-494a-8ba9-a7402cc0dda7-kube-api-access-p8l2g\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:57 crc kubenswrapper[4847]: I0218 00:47:57.972114 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-75skr" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.152963 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-j4gvn" event={"ID":"191319b2-ff52-494a-8ba9-a7402cc0dda7","Type":"ContainerDied","Data":"de51ce5eafe25fa16435c502c202d8143bdcb954737deb8d13c6eb5c96933a23"} Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.153278 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de51ce5eafe25fa16435c502c202d8143bdcb954737deb8d13c6eb5c96933a23" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.153375 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-j4gvn" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.153784 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74dc88fc-brfpx" podUID="ac9a8321-4947-4121-b648-a6656fc592f4" containerName="dnsmasq-dns" containerID="cri-o://79b8d197f1ebac386fa525d4fc0fe25feb3b7042552c5c1ce55db8d309be047e" gracePeriod=10 Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.262889 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-75skr"] Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.326770 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-skjr9"] Feb 18 00:47:58 crc kubenswrapper[4847]: E0218 00:47:58.328942 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="191319b2-ff52-494a-8ba9-a7402cc0dda7" containerName="keystone-db-sync" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.328965 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="191319b2-ff52-494a-8ba9-a7402cc0dda7" containerName="keystone-db-sync" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.329401 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="191319b2-ff52-494a-8ba9-a7402cc0dda7" containerName="keystone-db-sync" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.331186 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.368836 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-skjr9"] Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.396331 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-skjr9\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.396437 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqdwz\" (UniqueName: \"kubernetes.io/projected/14908f45-54c4-4da3-867f-190a993ed4e1-kube-api-access-rqdwz\") pod \"dnsmasq-dns-bbf5cc879-skjr9\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.396467 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-skjr9\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.396503 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-skjr9\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.396530 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-skjr9\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.396557 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-config\") pod \"dnsmasq-dns-bbf5cc879-skjr9\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.400378 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-g8444"] Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.401714 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g8444" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.404534 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.404906 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.411421 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.411706 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5phwk" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.411773 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.443968 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-bootstrap-g8444"] Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.464973 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-znxsz"] Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.466277 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-znxsz" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.470204 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.470451 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-dmgzf" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.489549 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-znxsz"] Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.497886 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-fernet-keys\") pod \"keystone-bootstrap-g8444\" (UID: \"8792bde0-6a55-4830-9220-b9170374ad48\") " pod="openstack/keystone-bootstrap-g8444" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.497933 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqdwz\" (UniqueName: \"kubernetes.io/projected/14908f45-54c4-4da3-867f-190a993ed4e1-kube-api-access-rqdwz\") pod \"dnsmasq-dns-bbf5cc879-skjr9\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.497956 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-config-data\") pod \"keystone-bootstrap-g8444\" (UID: \"8792bde0-6a55-4830-9220-b9170374ad48\") " 
pod="openstack/keystone-bootstrap-g8444" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.497976 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-skjr9\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.498027 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-skjr9\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.498836 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-scripts\") pod \"keystone-bootstrap-g8444\" (UID: \"8792bde0-6a55-4830-9220-b9170374ad48\") " pod="openstack/keystone-bootstrap-g8444" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.498871 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-skjr9\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.498927 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vqlx\" (UniqueName: \"kubernetes.io/projected/8792bde0-6a55-4830-9220-b9170374ad48-kube-api-access-4vqlx\") pod \"keystone-bootstrap-g8444\" (UID: \"8792bde0-6a55-4830-9220-b9170374ad48\") " 
pod="openstack/keystone-bootstrap-g8444" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.498949 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-config\") pod \"dnsmasq-dns-bbf5cc879-skjr9\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.499007 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-credential-keys\") pod \"keystone-bootstrap-g8444\" (UID: \"8792bde0-6a55-4830-9220-b9170374ad48\") " pod="openstack/keystone-bootstrap-g8444" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.499042 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-skjr9\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.499107 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-combined-ca-bundle\") pod \"keystone-bootstrap-g8444\" (UID: \"8792bde0-6a55-4830-9220-b9170374ad48\") " pod="openstack/keystone-bootstrap-g8444" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.501100 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-skjr9\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" Feb 18 00:47:58 crc 
kubenswrapper[4847]: I0218 00:47:58.503217 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-config\") pod \"dnsmasq-dns-bbf5cc879-skjr9\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.503496 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-skjr9\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.507275 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-skjr9\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.507545 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-skjr9\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.585247 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqdwz\" (UniqueName: \"kubernetes.io/projected/14908f45-54c4-4da3-867f-190a993ed4e1-kube-api-access-rqdwz\") pod \"dnsmasq-dns-bbf5cc879-skjr9\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.623781 4847 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-scripts\") pod \"keystone-bootstrap-g8444\" (UID: \"8792bde0-6a55-4830-9220-b9170374ad48\") " pod="openstack/keystone-bootstrap-g8444" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.623844 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vqlx\" (UniqueName: \"kubernetes.io/projected/8792bde0-6a55-4830-9220-b9170374ad48-kube-api-access-4vqlx\") pod \"keystone-bootstrap-g8444\" (UID: \"8792bde0-6a55-4830-9220-b9170374ad48\") " pod="openstack/keystone-bootstrap-g8444" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.623883 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-credential-keys\") pod \"keystone-bootstrap-g8444\" (UID: \"8792bde0-6a55-4830-9220-b9170374ad48\") " pod="openstack/keystone-bootstrap-g8444" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.623917 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-combined-ca-bundle\") pod \"keystone-bootstrap-g8444\" (UID: \"8792bde0-6a55-4830-9220-b9170374ad48\") " pod="openstack/keystone-bootstrap-g8444" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.623949 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/014e96ac-8dcb-4d73-a9e1-1ade26742005-config-data\") pod \"heat-db-sync-znxsz\" (UID: \"014e96ac-8dcb-4d73-a9e1-1ade26742005\") " pod="openstack/heat-db-sync-znxsz" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.623989 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh7hb\" (UniqueName: 
\"kubernetes.io/projected/014e96ac-8dcb-4d73-a9e1-1ade26742005-kube-api-access-zh7hb\") pod \"heat-db-sync-znxsz\" (UID: \"014e96ac-8dcb-4d73-a9e1-1ade26742005\") " pod="openstack/heat-db-sync-znxsz" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.624012 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/014e96ac-8dcb-4d73-a9e1-1ade26742005-combined-ca-bundle\") pod \"heat-db-sync-znxsz\" (UID: \"014e96ac-8dcb-4d73-a9e1-1ade26742005\") " pod="openstack/heat-db-sync-znxsz" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.624055 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-fernet-keys\") pod \"keystone-bootstrap-g8444\" (UID: \"8792bde0-6a55-4830-9220-b9170374ad48\") " pod="openstack/keystone-bootstrap-g8444" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.624078 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-config-data\") pod \"keystone-bootstrap-g8444\" (UID: \"8792bde0-6a55-4830-9220-b9170374ad48\") " pod="openstack/keystone-bootstrap-g8444" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.628134 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-combined-ca-bundle\") pod \"keystone-bootstrap-g8444\" (UID: \"8792bde0-6a55-4830-9220-b9170374ad48\") " pod="openstack/keystone-bootstrap-g8444" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.633570 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-config-data\") pod \"keystone-bootstrap-g8444\" (UID: 
\"8792bde0-6a55-4830-9220-b9170374ad48\") " pod="openstack/keystone-bootstrap-g8444" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.637521 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-75skr"] Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.645530 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-fernet-keys\") pod \"keystone-bootstrap-g8444\" (UID: \"8792bde0-6a55-4830-9220-b9170374ad48\") " pod="openstack/keystone-bootstrap-g8444" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.646894 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-credential-keys\") pod \"keystone-bootstrap-g8444\" (UID: \"8792bde0-6a55-4830-9220-b9170374ad48\") " pod="openstack/keystone-bootstrap-g8444" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.664737 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-scripts\") pod \"keystone-bootstrap-g8444\" (UID: \"8792bde0-6a55-4830-9220-b9170374ad48\") " pod="openstack/keystone-bootstrap-g8444" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.669152 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vqlx\" (UniqueName: \"kubernetes.io/projected/8792bde0-6a55-4830-9220-b9170374ad48-kube-api-access-4vqlx\") pod \"keystone-bootstrap-g8444\" (UID: \"8792bde0-6a55-4830-9220-b9170374ad48\") " pod="openstack/keystone-bootstrap-g8444" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.708833 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.723723 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g8444" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.725512 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/014e96ac-8dcb-4d73-a9e1-1ade26742005-config-data\") pod \"heat-db-sync-znxsz\" (UID: \"014e96ac-8dcb-4d73-a9e1-1ade26742005\") " pod="openstack/heat-db-sync-znxsz" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.725601 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh7hb\" (UniqueName: \"kubernetes.io/projected/014e96ac-8dcb-4d73-a9e1-1ade26742005-kube-api-access-zh7hb\") pod \"heat-db-sync-znxsz\" (UID: \"014e96ac-8dcb-4d73-a9e1-1ade26742005\") " pod="openstack/heat-db-sync-znxsz" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.725643 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/014e96ac-8dcb-4d73-a9e1-1ade26742005-combined-ca-bundle\") pod \"heat-db-sync-znxsz\" (UID: \"014e96ac-8dcb-4d73-a9e1-1ade26742005\") " pod="openstack/heat-db-sync-znxsz" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.745765 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/014e96ac-8dcb-4d73-a9e1-1ade26742005-config-data\") pod \"heat-db-sync-znxsz\" (UID: \"014e96ac-8dcb-4d73-a9e1-1ade26742005\") " pod="openstack/heat-db-sync-znxsz" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.745911 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/014e96ac-8dcb-4d73-a9e1-1ade26742005-combined-ca-bundle\") pod \"heat-db-sync-znxsz\" 
(UID: \"014e96ac-8dcb-4d73-a9e1-1ade26742005\") " pod="openstack/heat-db-sync-znxsz" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.758747 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-qxdsw"] Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.762794 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qxdsw" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.766965 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qxdsw"] Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.768632 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.769667 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.773388 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kxl6j" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.793253 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh7hb\" (UniqueName: \"kubernetes.io/projected/014e96ac-8dcb-4d73-a9e1-1ade26742005-kube-api-access-zh7hb\") pod \"heat-db-sync-znxsz\" (UID: \"014e96ac-8dcb-4d73-a9e1-1ade26742005\") " pod="openstack/heat-db-sync-znxsz" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.803844 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-pd45p"] Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.807503 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-pd45p" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.815259 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-wt6q8" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.815311 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.829664 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pd45p"] Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.830826 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-db-sync-config-data\") pod \"cinder-db-sync-qxdsw\" (UID: \"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " pod="openstack/cinder-db-sync-qxdsw" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.830906 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf4ps\" (UniqueName: \"kubernetes.io/projected/e40815e0-c0e4-4265-94f8-c9c7b262a011-kube-api-access-xf4ps\") pod \"cinder-db-sync-qxdsw\" (UID: \"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " pod="openstack/cinder-db-sync-qxdsw" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.830991 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-scripts\") pod \"cinder-db-sync-qxdsw\" (UID: \"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " pod="openstack/cinder-db-sync-qxdsw" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.831060 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-combined-ca-bundle\") pod \"cinder-db-sync-qxdsw\" (UID: \"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " pod="openstack/cinder-db-sync-qxdsw" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.831202 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-config-data\") pod \"cinder-db-sync-qxdsw\" (UID: \"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " pod="openstack/cinder-db-sync-qxdsw" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.831283 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e40815e0-c0e4-4265-94f8-c9c7b262a011-etc-machine-id\") pod \"cinder-db-sync-qxdsw\" (UID: \"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " pod="openstack/cinder-db-sync-qxdsw" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.864487 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8dkg7"] Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.865906 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8dkg7" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.869098 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.869308 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-5cpw4" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.869552 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.879991 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-skjr9"] Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.896675 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8dkg7"] Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.937556 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-db-sync-config-data\") pod \"cinder-db-sync-qxdsw\" (UID: \"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " pod="openstack/cinder-db-sync-qxdsw" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.937596 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf4ps\" (UniqueName: \"kubernetes.io/projected/e40815e0-c0e4-4265-94f8-c9c7b262a011-kube-api-access-xf4ps\") pod \"cinder-db-sync-qxdsw\" (UID: \"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " pod="openstack/cinder-db-sync-qxdsw" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.937655 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ca59c512-1360-4daf-9ee3-9c5c7cd143e1-db-sync-config-data\") pod \"barbican-db-sync-pd45p\" (UID: 
\"ca59c512-1360-4daf-9ee3-9c5c7cd143e1\") " pod="openstack/barbican-db-sync-pd45p" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.937678 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca59c512-1360-4daf-9ee3-9c5c7cd143e1-combined-ca-bundle\") pod \"barbican-db-sync-pd45p\" (UID: \"ca59c512-1360-4daf-9ee3-9c5c7cd143e1\") " pod="openstack/barbican-db-sync-pd45p" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.937736 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-scripts\") pod \"cinder-db-sync-qxdsw\" (UID: \"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " pod="openstack/cinder-db-sync-qxdsw" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.937755 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de1208a0-7171-4d36-af50-a33f03208e5d-scripts\") pod \"placement-db-sync-8dkg7\" (UID: \"de1208a0-7171-4d36-af50-a33f03208e5d\") " pod="openstack/placement-db-sync-8dkg7" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.937801 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-combined-ca-bundle\") pod \"cinder-db-sync-qxdsw\" (UID: \"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " pod="openstack/cinder-db-sync-qxdsw" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.937820 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gcmd\" (UniqueName: \"kubernetes.io/projected/ca59c512-1360-4daf-9ee3-9c5c7cd143e1-kube-api-access-4gcmd\") pod \"barbican-db-sync-pd45p\" (UID: \"ca59c512-1360-4daf-9ee3-9c5c7cd143e1\") " 
pod="openstack/barbican-db-sync-pd45p" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.937866 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1208a0-7171-4d36-af50-a33f03208e5d-logs\") pod \"placement-db-sync-8dkg7\" (UID: \"de1208a0-7171-4d36-af50-a33f03208e5d\") " pod="openstack/placement-db-sync-8dkg7" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.937909 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsx2q\" (UniqueName: \"kubernetes.io/projected/de1208a0-7171-4d36-af50-a33f03208e5d-kube-api-access-qsx2q\") pod \"placement-db-sync-8dkg7\" (UID: \"de1208a0-7171-4d36-af50-a33f03208e5d\") " pod="openstack/placement-db-sync-8dkg7" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.937955 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-config-data\") pod \"cinder-db-sync-qxdsw\" (UID: \"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " pod="openstack/cinder-db-sync-qxdsw" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.937971 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1208a0-7171-4d36-af50-a33f03208e5d-combined-ca-bundle\") pod \"placement-db-sync-8dkg7\" (UID: \"de1208a0-7171-4d36-af50-a33f03208e5d\") " pod="openstack/placement-db-sync-8dkg7" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.937993 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1208a0-7171-4d36-af50-a33f03208e5d-config-data\") pod \"placement-db-sync-8dkg7\" (UID: \"de1208a0-7171-4d36-af50-a33f03208e5d\") " pod="openstack/placement-db-sync-8dkg7" Feb 18 
00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.938017 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e40815e0-c0e4-4265-94f8-c9c7b262a011-etc-machine-id\") pod \"cinder-db-sync-qxdsw\" (UID: \"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " pod="openstack/cinder-db-sync-qxdsw" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.938140 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e40815e0-c0e4-4265-94f8-c9c7b262a011-etc-machine-id\") pod \"cinder-db-sync-qxdsw\" (UID: \"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " pod="openstack/cinder-db-sync-qxdsw" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.972376 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-db-sync-config-data\") pod \"cinder-db-sync-qxdsw\" (UID: \"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " pod="openstack/cinder-db-sync-qxdsw" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.972753 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-combined-ca-bundle\") pod \"cinder-db-sync-qxdsw\" (UID: \"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " pod="openstack/cinder-db-sync-qxdsw" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.988000 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-czdg6"] Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.990559 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-czdg6" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.991012 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-scripts\") pod \"cinder-db-sync-qxdsw\" (UID: \"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " pod="openstack/cinder-db-sync-qxdsw" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.993724 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-r4jrh" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.993907 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 18 00:47:58 crc kubenswrapper[4847]: I0218 00:47:58.994265 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:58.994959 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-config-data\") pod \"cinder-db-sync-qxdsw\" (UID: \"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " pod="openstack/cinder-db-sync-qxdsw" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.019569 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf4ps\" (UniqueName: \"kubernetes.io/projected/e40815e0-c0e4-4265-94f8-c9c7b262a011-kube-api-access-xf4ps\") pod \"cinder-db-sync-qxdsw\" (UID: \"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " pod="openstack/cinder-db-sync-qxdsw" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.028671 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-czdg6"] Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.038416 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-dxph7"] Feb 18 00:47:59 crc 
kubenswrapper[4847]: I0218 00:47:59.044267 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.044352 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ca59c512-1360-4daf-9ee3-9c5c7cd143e1-db-sync-config-data\") pod \"barbican-db-sync-pd45p\" (UID: \"ca59c512-1360-4daf-9ee3-9c5c7cd143e1\") " pod="openstack/barbican-db-sync-pd45p" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.044403 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d60eb1ff-80a3-47ca-b223-3aa7c7a310c0-config\") pod \"neutron-db-sync-czdg6\" (UID: \"d60eb1ff-80a3-47ca-b223-3aa7c7a310c0\") " pod="openstack/neutron-db-sync-czdg6" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.044426 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca59c512-1360-4daf-9ee3-9c5c7cd143e1-combined-ca-bundle\") pod \"barbican-db-sync-pd45p\" (UID: \"ca59c512-1360-4daf-9ee3-9c5c7cd143e1\") " pod="openstack/barbican-db-sync-pd45p" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.044460 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de1208a0-7171-4d36-af50-a33f03208e5d-scripts\") pod \"placement-db-sync-8dkg7\" (UID: \"de1208a0-7171-4d36-af50-a33f03208e5d\") " pod="openstack/placement-db-sync-8dkg7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.044481 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gcmd\" (UniqueName: \"kubernetes.io/projected/ca59c512-1360-4daf-9ee3-9c5c7cd143e1-kube-api-access-4gcmd\") pod \"barbican-db-sync-pd45p\" (UID: 
\"ca59c512-1360-4daf-9ee3-9c5c7cd143e1\") " pod="openstack/barbican-db-sync-pd45p" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.044499 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d60eb1ff-80a3-47ca-b223-3aa7c7a310c0-combined-ca-bundle\") pod \"neutron-db-sync-czdg6\" (UID: \"d60eb1ff-80a3-47ca-b223-3aa7c7a310c0\") " pod="openstack/neutron-db-sync-czdg6" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.044549 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1208a0-7171-4d36-af50-a33f03208e5d-logs\") pod \"placement-db-sync-8dkg7\" (UID: \"de1208a0-7171-4d36-af50-a33f03208e5d\") " pod="openstack/placement-db-sync-8dkg7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.044569 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsx2q\" (UniqueName: \"kubernetes.io/projected/de1208a0-7171-4d36-af50-a33f03208e5d-kube-api-access-qsx2q\") pod \"placement-db-sync-8dkg7\" (UID: \"de1208a0-7171-4d36-af50-a33f03208e5d\") " pod="openstack/placement-db-sync-8dkg7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.044655 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1208a0-7171-4d36-af50-a33f03208e5d-combined-ca-bundle\") pod \"placement-db-sync-8dkg7\" (UID: \"de1208a0-7171-4d36-af50-a33f03208e5d\") " pod="openstack/placement-db-sync-8dkg7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.044680 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1208a0-7171-4d36-af50-a33f03208e5d-config-data\") pod \"placement-db-sync-8dkg7\" (UID: \"de1208a0-7171-4d36-af50-a33f03208e5d\") " pod="openstack/placement-db-sync-8dkg7" Feb 18 
00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.044714 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhlw8\" (UniqueName: \"kubernetes.io/projected/d60eb1ff-80a3-47ca-b223-3aa7c7a310c0-kube-api-access-dhlw8\") pod \"neutron-db-sync-czdg6\" (UID: \"d60eb1ff-80a3-47ca-b223-3aa7c7a310c0\") " pod="openstack/neutron-db-sync-czdg6" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.045297 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1208a0-7171-4d36-af50-a33f03208e5d-logs\") pod \"placement-db-sync-8dkg7\" (UID: \"de1208a0-7171-4d36-af50-a33f03208e5d\") " pod="openstack/placement-db-sync-8dkg7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.050286 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ca59c512-1360-4daf-9ee3-9c5c7cd143e1-db-sync-config-data\") pod \"barbican-db-sync-pd45p\" (UID: \"ca59c512-1360-4daf-9ee3-9c5c7cd143e1\") " pod="openstack/barbican-db-sync-pd45p" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.053143 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1208a0-7171-4d36-af50-a33f03208e5d-config-data\") pod \"placement-db-sync-8dkg7\" (UID: \"de1208a0-7171-4d36-af50-a33f03208e5d\") " pod="openstack/placement-db-sync-8dkg7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.056630 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca59c512-1360-4daf-9ee3-9c5c7cd143e1-combined-ca-bundle\") pod \"barbican-db-sync-pd45p\" (UID: \"ca59c512-1360-4daf-9ee3-9c5c7cd143e1\") " pod="openstack/barbican-db-sync-pd45p" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.065321 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1208a0-7171-4d36-af50-a33f03208e5d-combined-ca-bundle\") pod \"placement-db-sync-8dkg7\" (UID: \"de1208a0-7171-4d36-af50-a33f03208e5d\") " pod="openstack/placement-db-sync-8dkg7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.073419 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsx2q\" (UniqueName: \"kubernetes.io/projected/de1208a0-7171-4d36-af50-a33f03208e5d-kube-api-access-qsx2q\") pod \"placement-db-sync-8dkg7\" (UID: \"de1208a0-7171-4d36-af50-a33f03208e5d\") " pod="openstack/placement-db-sync-8dkg7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.075779 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gcmd\" (UniqueName: \"kubernetes.io/projected/ca59c512-1360-4daf-9ee3-9c5c7cd143e1-kube-api-access-4gcmd\") pod \"barbican-db-sync-pd45p\" (UID: \"ca59c512-1360-4daf-9ee3-9c5c7cd143e1\") " pod="openstack/barbican-db-sync-pd45p" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.078804 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de1208a0-7171-4d36-af50-a33f03208e5d-scripts\") pod \"placement-db-sync-8dkg7\" (UID: \"de1208a0-7171-4d36-af50-a33f03208e5d\") " pod="openstack/placement-db-sync-8dkg7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.086999 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-znxsz" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.090938 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-dxph7"] Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.125985 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pd45p" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.139823 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qxdsw" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.166782 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-dxph7\" (UID: \"61cc2fbc-ba97-4934-888b-a52b7329727d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.166896 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-dxph7\" (UID: \"61cc2fbc-ba97-4934-888b-a52b7329727d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.167245 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-dxph7\" (UID: \"61cc2fbc-ba97-4934-888b-a52b7329727d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.167338 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhlw8\" (UniqueName: \"kubernetes.io/projected/d60eb1ff-80a3-47ca-b223-3aa7c7a310c0-kube-api-access-dhlw8\") pod \"neutron-db-sync-czdg6\" (UID: \"d60eb1ff-80a3-47ca-b223-3aa7c7a310c0\") " pod="openstack/neutron-db-sync-czdg6" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.167418 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-config\") pod \"dnsmasq-dns-56df8fb6b7-dxph7\" (UID: 
\"61cc2fbc-ba97-4934-888b-a52b7329727d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.167471 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-dxph7\" (UID: \"61cc2fbc-ba97-4934-888b-a52b7329727d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.167630 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d60eb1ff-80a3-47ca-b223-3aa7c7a310c0-config\") pod \"neutron-db-sync-czdg6\" (UID: \"d60eb1ff-80a3-47ca-b223-3aa7c7a310c0\") " pod="openstack/neutron-db-sync-czdg6" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.167819 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d60eb1ff-80a3-47ca-b223-3aa7c7a310c0-combined-ca-bundle\") pod \"neutron-db-sync-czdg6\" (UID: \"d60eb1ff-80a3-47ca-b223-3aa7c7a310c0\") " pod="openstack/neutron-db-sync-czdg6" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.167912 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wwtm\" (UniqueName: \"kubernetes.io/projected/61cc2fbc-ba97-4934-888b-a52b7329727d-kube-api-access-2wwtm\") pod \"dnsmasq-dns-56df8fb6b7-dxph7\" (UID: \"61cc2fbc-ba97-4934-888b-a52b7329727d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.185385 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8dkg7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.221327 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.228964 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.233745 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.243809 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d60eb1ff-80a3-47ca-b223-3aa7c7a310c0-config\") pod \"neutron-db-sync-czdg6\" (UID: \"d60eb1ff-80a3-47ca-b223-3aa7c7a310c0\") " pod="openstack/neutron-db-sync-czdg6" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.244333 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d60eb1ff-80a3-47ca-b223-3aa7c7a310c0-combined-ca-bundle\") pod \"neutron-db-sync-czdg6\" (UID: \"d60eb1ff-80a3-47ca-b223-3aa7c7a310c0\") " pod="openstack/neutron-db-sync-czdg6" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.250638 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhlw8\" (UniqueName: \"kubernetes.io/projected/d60eb1ff-80a3-47ca-b223-3aa7c7a310c0-kube-api-access-dhlw8\") pod \"neutron-db-sync-czdg6\" (UID: \"d60eb1ff-80a3-47ca-b223-3aa7c7a310c0\") " pod="openstack/neutron-db-sync-czdg6" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.258832 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.262149 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:47:59 crc kubenswrapper[4847]: 
I0218 00:47:59.271036 4847 generic.go:334] "Generic (PLEG): container finished" podID="ac9a8321-4947-4121-b648-a6656fc592f4" containerID="79b8d197f1ebac386fa525d4fc0fe25feb3b7042552c5c1ce55db8d309be047e" exitCode=0 Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.271120 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-brfpx" event={"ID":"ac9a8321-4947-4121-b648-a6656fc592f4","Type":"ContainerDied","Data":"79b8d197f1ebac386fa525d4fc0fe25feb3b7042552c5c1ce55db8d309be047e"} Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.290864 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wwtm\" (UniqueName: \"kubernetes.io/projected/61cc2fbc-ba97-4934-888b-a52b7329727d-kube-api-access-2wwtm\") pod \"dnsmasq-dns-56df8fb6b7-dxph7\" (UID: \"61cc2fbc-ba97-4934-888b-a52b7329727d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.296495 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-dxph7\" (UID: \"61cc2fbc-ba97-4934-888b-a52b7329727d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.303950 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-dxph7\" (UID: \"61cc2fbc-ba97-4934-888b-a52b7329727d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.306142 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-75skr" 
event={"ID":"95f5d4a1-3049-428c-b3db-aedefa3ff2ae","Type":"ContainerStarted","Data":"c8b19c1467dc781d6e8d200e9223c6a9b3a927567acf8c4e7cc03f94761b68bc"} Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.307139 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-dxph7\" (UID: \"61cc2fbc-ba97-4934-888b-a52b7329727d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.307501 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-dxph7\" (UID: \"61cc2fbc-ba97-4934-888b-a52b7329727d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.307680 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-config\") pod \"dnsmasq-dns-56df8fb6b7-dxph7\" (UID: \"61cc2fbc-ba97-4934-888b-a52b7329727d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.307798 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-dxph7\" (UID: \"61cc2fbc-ba97-4934-888b-a52b7329727d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.307973 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-dxph7\" (UID: 
\"61cc2fbc-ba97-4934-888b-a52b7329727d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.309373 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-dxph7\" (UID: \"61cc2fbc-ba97-4934-888b-a52b7329727d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.309683 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-dxph7\" (UID: \"61cc2fbc-ba97-4934-888b-a52b7329727d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.310735 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-config\") pod \"dnsmasq-dns-56df8fb6b7-dxph7\" (UID: \"61cc2fbc-ba97-4934-888b-a52b7329727d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.311953 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wwtm\" (UniqueName: \"kubernetes.io/projected/61cc2fbc-ba97-4934-888b-a52b7329727d-kube-api-access-2wwtm\") pod \"dnsmasq-dns-56df8fb6b7-dxph7\" (UID: \"61cc2fbc-ba97-4934-888b-a52b7329727d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.320939 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-brfpx" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.409086 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-ovsdbserver-nb\") pod \"ac9a8321-4947-4121-b648-a6656fc592f4\" (UID: \"ac9a8321-4947-4121-b648-a6656fc592f4\") " Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.409132 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-dns-svc\") pod \"ac9a8321-4947-4121-b648-a6656fc592f4\" (UID: \"ac9a8321-4947-4121-b648-a6656fc592f4\") " Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.414524 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-config\") pod \"ac9a8321-4947-4121-b648-a6656fc592f4\" (UID: \"ac9a8321-4947-4121-b648-a6656fc592f4\") " Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.414992 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxhrd\" (UniqueName: \"kubernetes.io/projected/ac9a8321-4947-4121-b648-a6656fc592f4-kube-api-access-mxhrd\") pod \"ac9a8321-4947-4121-b648-a6656fc592f4\" (UID: \"ac9a8321-4947-4121-b648-a6656fc592f4\") " Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.415030 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-ovsdbserver-sb\") pod \"ac9a8321-4947-4121-b648-a6656fc592f4\" (UID: \"ac9a8321-4947-4121-b648-a6656fc592f4\") " Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.415866 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-config-data\") pod \"ceilometer-0\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " pod="openstack/ceilometer-0" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.415942 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " pod="openstack/ceilometer-0" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.416047 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " pod="openstack/ceilometer-0" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.416082 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chhx7\" (UniqueName: \"kubernetes.io/projected/a612e518-e7f5-4c88-8534-16768f748bed-kube-api-access-chhx7\") pod \"ceilometer-0\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " pod="openstack/ceilometer-0" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.416168 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a612e518-e7f5-4c88-8534-16768f748bed-run-httpd\") pod \"ceilometer-0\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " pod="openstack/ceilometer-0" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.416219 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-scripts\") pod \"ceilometer-0\" 
(UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " pod="openstack/ceilometer-0" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.416293 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a612e518-e7f5-4c88-8534-16768f748bed-log-httpd\") pod \"ceilometer-0\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " pod="openstack/ceilometer-0" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.429159 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac9a8321-4947-4121-b648-a6656fc592f4-kube-api-access-mxhrd" (OuterVolumeSpecName: "kube-api-access-mxhrd") pod "ac9a8321-4947-4121-b648-a6656fc592f4" (UID: "ac9a8321-4947-4121-b648-a6656fc592f4"). InnerVolumeSpecName "kube-api-access-mxhrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.490336 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-g8444"] Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.499096 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac9a8321-4947-4121-b648-a6656fc592f4" (UID: "ac9a8321-4947-4121-b648-a6656fc592f4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.509996 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-czdg6" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.518062 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " pod="openstack/ceilometer-0" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.518253 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chhx7\" (UniqueName: \"kubernetes.io/projected/a612e518-e7f5-4c88-8534-16768f748bed-kube-api-access-chhx7\") pod \"ceilometer-0\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " pod="openstack/ceilometer-0" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.520157 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a612e518-e7f5-4c88-8534-16768f748bed-run-httpd\") pod \"ceilometer-0\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " pod="openstack/ceilometer-0" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.520741 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-scripts\") pod \"ceilometer-0\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " pod="openstack/ceilometer-0" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.520933 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a612e518-e7f5-4c88-8534-16768f748bed-log-httpd\") pod \"ceilometer-0\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " pod="openstack/ceilometer-0" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.521108 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-config-data\") pod \"ceilometer-0\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " pod="openstack/ceilometer-0" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.521253 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " pod="openstack/ceilometer-0" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.521461 4847 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.521571 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxhrd\" (UniqueName: \"kubernetes.io/projected/ac9a8321-4947-4121-b648-a6656fc592f4-kube-api-access-mxhrd\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.520676 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a612e518-e7f5-4c88-8534-16768f748bed-run-httpd\") pod \"ceilometer-0\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " pod="openstack/ceilometer-0" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.522873 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a612e518-e7f5-4c88-8534-16768f748bed-log-httpd\") pod \"ceilometer-0\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " pod="openstack/ceilometer-0" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.524135 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " pod="openstack/ceilometer-0" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.527201 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " pod="openstack/ceilometer-0" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.545471 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.548269 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chhx7\" (UniqueName: \"kubernetes.io/projected/a612e518-e7f5-4c88-8534-16768f748bed-kube-api-access-chhx7\") pod \"ceilometer-0\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " pod="openstack/ceilometer-0" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.551226 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-scripts\") pod \"ceilometer-0\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " pod="openstack/ceilometer-0" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.560424 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-config-data\") pod \"ceilometer-0\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " pod="openstack/ceilometer-0" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.594004 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-ovsdbserver-nb" (OuterVolumeSpecName: 
"ovsdbserver-nb") pod "ac9a8321-4947-4121-b648-a6656fc592f4" (UID: "ac9a8321-4947-4121-b648-a6656fc592f4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.605051 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.630421 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.652513 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-config" (OuterVolumeSpecName: "config") pod "ac9a8321-4947-4121-b648-a6656fc592f4" (UID: "ac9a8321-4947-4121-b648-a6656fc592f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.673667 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ac9a8321-4947-4121-b648-a6656fc592f4" (UID: "ac9a8321-4947-4121-b648-a6656fc592f4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.734588 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.734639 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac9a8321-4947-4121-b648-a6656fc592f4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.737927 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-skjr9"] Feb 18 00:47:59 crc kubenswrapper[4847]: I0218 00:47:59.759726 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-znxsz"] Feb 18 00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.213140 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8dkg7"] Feb 18 00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.269797 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pd45p"] Feb 18 00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.295642 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qxdsw"] Feb 18 00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.331699 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g8444" event={"ID":"8792bde0-6a55-4830-9220-b9170374ad48","Type":"ContainerStarted","Data":"30e074840237a94349e6e93cf790a01ac09f029dbf7c13d41eb502886bd027cf"} Feb 18 00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.331748 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g8444" event={"ID":"8792bde0-6a55-4830-9220-b9170374ad48","Type":"ContainerStarted","Data":"7caba77484ae375b9994a833f1eaf7d27f55a1ec8aa6d4faf9b20b30b6e6888a"} Feb 18 
00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.344127 4847 generic.go:334] "Generic (PLEG): container finished" podID="95f5d4a1-3049-428c-b3db-aedefa3ff2ae" containerID="c2a72fa3df02dd581ef029d847a3fabd2a128b610e17d5d2941c73e5b152d97a" exitCode=0 Feb 18 00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.344211 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-75skr" event={"ID":"95f5d4a1-3049-428c-b3db-aedefa3ff2ae","Type":"ContainerDied","Data":"c2a72fa3df02dd581ef029d847a3fabd2a128b610e17d5d2941c73e5b152d97a"} Feb 18 00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.346817 4847 generic.go:334] "Generic (PLEG): container finished" podID="14908f45-54c4-4da3-867f-190a993ed4e1" containerID="962ec7c4da24e7dc6db39027c56fe9144a22d7ff25c438ad252a6faaaf2b710e" exitCode=0 Feb 18 00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.346898 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" event={"ID":"14908f45-54c4-4da3-867f-190a993ed4e1","Type":"ContainerDied","Data":"962ec7c4da24e7dc6db39027c56fe9144a22d7ff25c438ad252a6faaaf2b710e"} Feb 18 00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.346916 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" event={"ID":"14908f45-54c4-4da3-867f-190a993ed4e1","Type":"ContainerStarted","Data":"c82389458804d8ff459b95da334fe773264c765ac04d8acfac413dcc11a33691"} Feb 18 00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.377265 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-dxph7"] Feb 18 00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.403598 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.404234 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-znxsz" 
event={"ID":"014e96ac-8dcb-4d73-a9e1-1ade26742005","Type":"ContainerStarted","Data":"d6fca2e113bde5aa3bc71762b848d25e61025ebf3e5fe0246606f6e7e8f65367"} Feb 18 00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.409418 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-g8444" podStartSLOduration=2.409398227 podStartE2EDuration="2.409398227s" podCreationTimestamp="2026-02-18 00:47:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:48:00.360676913 +0000 UTC m=+1353.738027865" watchObservedRunningTime="2026-02-18 00:48:00.409398227 +0000 UTC m=+1353.786749169" Feb 18 00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.444275 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pd45p" event={"ID":"ca59c512-1360-4daf-9ee3-9c5c7cd143e1","Type":"ContainerStarted","Data":"09cb8ad2cd0b3773f9cb0e8e138702d9057baba5b2e10791b56e7ebe372908c0"} Feb 18 00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.445969 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8dkg7" event={"ID":"de1208a0-7171-4d36-af50-a33f03208e5d","Type":"ContainerStarted","Data":"39399715a96c10c240f8f6ed0a0380c21960578e4f031dde986504085f8c8c29"} Feb 18 00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.458925 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-brfpx" event={"ID":"ac9a8321-4947-4121-b648-a6656fc592f4","Type":"ContainerDied","Data":"96c6d7671cb3ec37af550954e2b3731595cb3ea11f6df558b48c67ee848f3445"} Feb 18 00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.458981 4847 scope.go:117] "RemoveContainer" containerID="79b8d197f1ebac386fa525d4fc0fe25feb3b7042552c5c1ce55db8d309be047e" Feb 18 00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.459050 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-brfpx" Feb 18 00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.493953 4847 scope.go:117] "RemoveContainer" containerID="bdb3ed82b0cbb554d4ea6ce921bddd3e9b81e96a004d89c4b2991ea81abc3fa4" Feb 18 00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.528693 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-czdg6"] Feb 18 00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.594982 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-brfpx"] Feb 18 00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.606679 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-brfpx"] Feb 18 00:48:00 crc kubenswrapper[4847]: I0218 00:48:00.965765 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" Feb 18 00:48:01 crc kubenswrapper[4847]: E0218 00:48:01.019320 4847 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61cc2fbc_ba97_4934_888b_a52b7329727d.slice/crio-conmon-5eff6c224383356bf3bbfb5592e49ea1d884c64d51769216473768a350e3cf28.scope\": RecentStats: unable to find data in memory cache]" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.066313 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-ovsdbserver-nb\") pod \"14908f45-54c4-4da3-867f-190a993ed4e1\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.066414 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqdwz\" (UniqueName: \"kubernetes.io/projected/14908f45-54c4-4da3-867f-190a993ed4e1-kube-api-access-rqdwz\") pod 
\"14908f45-54c4-4da3-867f-190a993ed4e1\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.066466 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-ovsdbserver-sb\") pod \"14908f45-54c4-4da3-867f-190a993ed4e1\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.066509 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-dns-swift-storage-0\") pod \"14908f45-54c4-4da3-867f-190a993ed4e1\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.066580 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-config\") pod \"14908f45-54c4-4da3-867f-190a993ed4e1\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.066700 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-dns-svc\") pod \"14908f45-54c4-4da3-867f-190a993ed4e1\" (UID: \"14908f45-54c4-4da3-867f-190a993ed4e1\") " Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.104296 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14908f45-54c4-4da3-867f-190a993ed4e1-kube-api-access-rqdwz" (OuterVolumeSpecName: "kube-api-access-rqdwz") pod "14908f45-54c4-4da3-867f-190a993ed4e1" (UID: "14908f45-54c4-4da3-867f-190a993ed4e1"). InnerVolumeSpecName "kube-api-access-rqdwz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.106752 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "14908f45-54c4-4da3-867f-190a993ed4e1" (UID: "14908f45-54c4-4da3-867f-190a993ed4e1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.148043 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-config" (OuterVolumeSpecName: "config") pod "14908f45-54c4-4da3-867f-190a993ed4e1" (UID: "14908f45-54c4-4da3-867f-190a993ed4e1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.172104 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqdwz\" (UniqueName: \"kubernetes.io/projected/14908f45-54c4-4da3-867f-190a993ed4e1-kube-api-access-rqdwz\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.172132 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.172141 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.215213 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod 
"14908f45-54c4-4da3-867f-190a993ed4e1" (UID: "14908f45-54c4-4da3-867f-190a993ed4e1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.248054 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "14908f45-54c4-4da3-867f-190a993ed4e1" (UID: "14908f45-54c4-4da3-867f-190a993ed4e1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.248948 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-75skr" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.274803 4847 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.275525 4847 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.281103 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "14908f45-54c4-4da3-867f-190a993ed4e1" (UID: "14908f45-54c4-4da3-867f-190a993ed4e1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.378380 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-ovsdbserver-nb\") pod \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.378550 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-dns-swift-storage-0\") pod \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.378619 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5q87\" (UniqueName: \"kubernetes.io/projected/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-kube-api-access-k5q87\") pod \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.378708 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-dns-svc\") pod \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.378736 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-config\") pod \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.378756 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-ovsdbserver-sb\") pod \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\" (UID: \"95f5d4a1-3049-428c-b3db-aedefa3ff2ae\") " Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.379147 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/14908f45-54c4-4da3-867f-190a993ed4e1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.385958 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-kube-api-access-k5q87" (OuterVolumeSpecName: "kube-api-access-k5q87") pod "95f5d4a1-3049-428c-b3db-aedefa3ff2ae" (UID: "95f5d4a1-3049-428c-b3db-aedefa3ff2ae"). InnerVolumeSpecName "kube-api-access-k5q87". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.411777 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-config" (OuterVolumeSpecName: "config") pod "95f5d4a1-3049-428c-b3db-aedefa3ff2ae" (UID: "95f5d4a1-3049-428c-b3db-aedefa3ff2ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.415302 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "95f5d4a1-3049-428c-b3db-aedefa3ff2ae" (UID: "95f5d4a1-3049-428c-b3db-aedefa3ff2ae"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.417011 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "95f5d4a1-3049-428c-b3db-aedefa3ff2ae" (UID: "95f5d4a1-3049-428c-b3db-aedefa3ff2ae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.418311 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "95f5d4a1-3049-428c-b3db-aedefa3ff2ae" (UID: "95f5d4a1-3049-428c-b3db-aedefa3ff2ae"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.437034 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac9a8321-4947-4121-b648-a6656fc592f4" path="/var/lib/kubelet/pods/ac9a8321-4947-4121-b648-a6656fc592f4/volumes" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.448909 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "95f5d4a1-3049-428c-b3db-aedefa3ff2ae" (UID: "95f5d4a1-3049-428c-b3db-aedefa3ff2ae"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.473577 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a612e518-e7f5-4c88-8534-16768f748bed","Type":"ContainerStarted","Data":"dbab888e590aa294fd77c7932937cbffe2815550f7457b0d0f55ac95edfc1d6c"} Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.475439 4847 generic.go:334] "Generic (PLEG): container finished" podID="61cc2fbc-ba97-4934-888b-a52b7329727d" containerID="5eff6c224383356bf3bbfb5592e49ea1d884c64d51769216473768a350e3cf28" exitCode=0 Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.475478 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" event={"ID":"61cc2fbc-ba97-4934-888b-a52b7329727d","Type":"ContainerDied","Data":"5eff6c224383356bf3bbfb5592e49ea1d884c64d51769216473768a350e3cf28"} Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.475496 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" event={"ID":"61cc2fbc-ba97-4934-888b-a52b7329727d","Type":"ContainerStarted","Data":"fd009e03d1f99ae75b685e339fbf9898783ac9e063739dfff7f5db06c58073d0"} Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.484326 4847 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.484440 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5q87\" (UniqueName: \"kubernetes.io/projected/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-kube-api-access-k5q87\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.484509 4847 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.484792 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.484874 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.484939 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95f5d4a1-3049-428c-b3db-aedefa3ff2ae-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.484485 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-75skr" event={"ID":"95f5d4a1-3049-428c-b3db-aedefa3ff2ae","Type":"ContainerDied","Data":"c8b19c1467dc781d6e8d200e9223c6a9b3a927567acf8c4e7cc03f94761b68bc"} Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.485085 4847 scope.go:117] "RemoveContainer" containerID="c2a72fa3df02dd581ef029d847a3fabd2a128b610e17d5d2941c73e5b152d97a" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.484448 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-75skr" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.512777 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" event={"ID":"14908f45-54c4-4da3-867f-190a993ed4e1","Type":"ContainerDied","Data":"c82389458804d8ff459b95da334fe773264c765ac04d8acfac413dcc11a33691"} Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.512882 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-skjr9" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.520309 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qxdsw" event={"ID":"e40815e0-c0e4-4265-94f8-c9c7b262a011","Type":"ContainerStarted","Data":"f97906abe9d3fb6041ad055e8c7aed037198bba5e9a2a805ca267b749fc0b43d"} Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.535119 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-czdg6" event={"ID":"d60eb1ff-80a3-47ca-b223-3aa7c7a310c0","Type":"ContainerStarted","Data":"c116e5094ea3264493552ea528dae9d7e9f0ae637beb77a1de03399ef2398e62"} Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.535161 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-czdg6" event={"ID":"d60eb1ff-80a3-47ca-b223-3aa7c7a310c0","Type":"ContainerStarted","Data":"99d21ac6bdd6f0fe3fc5b1c3fbf549873bf459818fe6e63644fa17610449ec91"} Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.594804 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-75skr"] Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.610020 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-75skr"] Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.612031 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-czdg6" 
podStartSLOduration=3.612011072 podStartE2EDuration="3.612011072s" podCreationTimestamp="2026-02-18 00:47:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:48:01.578635242 +0000 UTC m=+1354.955986184" watchObservedRunningTime="2026-02-18 00:48:01.612011072 +0000 UTC m=+1354.989362014" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.659227 4847 scope.go:117] "RemoveContainer" containerID="962ec7c4da24e7dc6db39027c56fe9144a22d7ff25c438ad252a6faaaf2b710e" Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.663554 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-skjr9"] Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.676376 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:48:01 crc kubenswrapper[4847]: I0218 00:48:01.688438 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-skjr9"] Feb 18 00:48:02 crc kubenswrapper[4847]: I0218 00:48:02.602704 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" event={"ID":"61cc2fbc-ba97-4934-888b-a52b7329727d","Type":"ContainerStarted","Data":"69f6ac37d2b7dba1207960455e9854a00bcd0fadefa66513bdc8b1776c8e16b8"} Feb 18 00:48:02 crc kubenswrapper[4847]: I0218 00:48:02.603819 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:48:02 crc kubenswrapper[4847]: I0218 00:48:02.628043 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" podStartSLOduration=4.628028338 podStartE2EDuration="4.628028338s" podCreationTimestamp="2026-02-18 00:47:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:48:02.623006259 +0000 UTC 
m=+1356.000357191" watchObservedRunningTime="2026-02-18 00:48:02.628028338 +0000 UTC m=+1356.005379280" Feb 18 00:48:03 crc kubenswrapper[4847]: I0218 00:48:03.416703 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14908f45-54c4-4da3-867f-190a993ed4e1" path="/var/lib/kubelet/pods/14908f45-54c4-4da3-867f-190a993ed4e1/volumes" Feb 18 00:48:03 crc kubenswrapper[4847]: I0218 00:48:03.417526 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95f5d4a1-3049-428c-b3db-aedefa3ff2ae" path="/var/lib/kubelet/pods/95f5d4a1-3049-428c-b3db-aedefa3ff2ae/volumes" Feb 18 00:48:04 crc kubenswrapper[4847]: I0218 00:48:04.639516 4847 generic.go:334] "Generic (PLEG): container finished" podID="8792bde0-6a55-4830-9220-b9170374ad48" containerID="30e074840237a94349e6e93cf790a01ac09f029dbf7c13d41eb502886bd027cf" exitCode=0 Feb 18 00:48:04 crc kubenswrapper[4847]: I0218 00:48:04.639761 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g8444" event={"ID":"8792bde0-6a55-4830-9220-b9170374ad48","Type":"ContainerDied","Data":"30e074840237a94349e6e93cf790a01ac09f029dbf7c13d41eb502886bd027cf"} Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.181949 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-g8444" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.298186 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vqlx\" (UniqueName: \"kubernetes.io/projected/8792bde0-6a55-4830-9220-b9170374ad48-kube-api-access-4vqlx\") pod \"8792bde0-6a55-4830-9220-b9170374ad48\" (UID: \"8792bde0-6a55-4830-9220-b9170374ad48\") " Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.298272 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-credential-keys\") pod \"8792bde0-6a55-4830-9220-b9170374ad48\" (UID: \"8792bde0-6a55-4830-9220-b9170374ad48\") " Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.298421 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-scripts\") pod \"8792bde0-6a55-4830-9220-b9170374ad48\" (UID: \"8792bde0-6a55-4830-9220-b9170374ad48\") " Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.298453 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-config-data\") pod \"8792bde0-6a55-4830-9220-b9170374ad48\" (UID: \"8792bde0-6a55-4830-9220-b9170374ad48\") " Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.298517 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-combined-ca-bundle\") pod \"8792bde0-6a55-4830-9220-b9170374ad48\" (UID: \"8792bde0-6a55-4830-9220-b9170374ad48\") " Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.298617 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-fernet-keys\") pod \"8792bde0-6a55-4830-9220-b9170374ad48\" (UID: \"8792bde0-6a55-4830-9220-b9170374ad48\") " Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.306131 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8792bde0-6a55-4830-9220-b9170374ad48" (UID: "8792bde0-6a55-4830-9220-b9170374ad48"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.306687 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-scripts" (OuterVolumeSpecName: "scripts") pod "8792bde0-6a55-4830-9220-b9170374ad48" (UID: "8792bde0-6a55-4830-9220-b9170374ad48"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.309813 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8792bde0-6a55-4830-9220-b9170374ad48-kube-api-access-4vqlx" (OuterVolumeSpecName: "kube-api-access-4vqlx") pod "8792bde0-6a55-4830-9220-b9170374ad48" (UID: "8792bde0-6a55-4830-9220-b9170374ad48"). InnerVolumeSpecName "kube-api-access-4vqlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.318251 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8792bde0-6a55-4830-9220-b9170374ad48" (UID: "8792bde0-6a55-4830-9220-b9170374ad48"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.342005 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8792bde0-6a55-4830-9220-b9170374ad48" (UID: "8792bde0-6a55-4830-9220-b9170374ad48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.349418 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-config-data" (OuterVolumeSpecName: "config-data") pod "8792bde0-6a55-4830-9220-b9170374ad48" (UID: "8792bde0-6a55-4830-9220-b9170374ad48"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.400375 4847 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.400822 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vqlx\" (UniqueName: \"kubernetes.io/projected/8792bde0-6a55-4830-9220-b9170374ad48-kube-api-access-4vqlx\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.400905 4847 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.400967 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 
00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.401019 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.401074 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8792bde0-6a55-4830-9220-b9170374ad48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.677758 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g8444" event={"ID":"8792bde0-6a55-4830-9220-b9170374ad48","Type":"ContainerDied","Data":"7caba77484ae375b9994a833f1eaf7d27f55a1ec8aa6d4faf9b20b30b6e6888a"} Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.677818 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7caba77484ae375b9994a833f1eaf7d27f55a1ec8aa6d4faf9b20b30b6e6888a" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.677843 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-g8444" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.747305 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-g8444"] Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.767737 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-g8444"] Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.831157 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-g2d77"] Feb 18 00:48:06 crc kubenswrapper[4847]: E0218 00:48:06.831738 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8792bde0-6a55-4830-9220-b9170374ad48" containerName="keystone-bootstrap" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.831752 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="8792bde0-6a55-4830-9220-b9170374ad48" containerName="keystone-bootstrap" Feb 18 00:48:06 crc kubenswrapper[4847]: E0218 00:48:06.831771 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac9a8321-4947-4121-b648-a6656fc592f4" containerName="init" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.831778 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac9a8321-4947-4121-b648-a6656fc592f4" containerName="init" Feb 18 00:48:06 crc kubenswrapper[4847]: E0218 00:48:06.831795 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14908f45-54c4-4da3-867f-190a993ed4e1" containerName="init" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.831802 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="14908f45-54c4-4da3-867f-190a993ed4e1" containerName="init" Feb 18 00:48:06 crc kubenswrapper[4847]: E0218 00:48:06.831817 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95f5d4a1-3049-428c-b3db-aedefa3ff2ae" containerName="init" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.831822 4847 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="95f5d4a1-3049-428c-b3db-aedefa3ff2ae" containerName="init" Feb 18 00:48:06 crc kubenswrapper[4847]: E0218 00:48:06.831834 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac9a8321-4947-4121-b648-a6656fc592f4" containerName="dnsmasq-dns" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.831840 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac9a8321-4947-4121-b648-a6656fc592f4" containerName="dnsmasq-dns" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.833689 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac9a8321-4947-4121-b648-a6656fc592f4" containerName="dnsmasq-dns" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.833722 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="8792bde0-6a55-4830-9220-b9170374ad48" containerName="keystone-bootstrap" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.833738 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="14908f45-54c4-4da3-867f-190a993ed4e1" containerName="init" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.833752 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="95f5d4a1-3049-428c-b3db-aedefa3ff2ae" containerName="init" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.834416 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-g2d77" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.836382 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.838340 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5phwk" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.838473 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.838582 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.840088 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.844294 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-g2d77"] Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.918687 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-combined-ca-bundle\") pod \"keystone-bootstrap-g2d77\" (UID: \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " pod="openstack/keystone-bootstrap-g2d77" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.919092 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-fernet-keys\") pod \"keystone-bootstrap-g2d77\" (UID: \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " pod="openstack/keystone-bootstrap-g2d77" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.919308 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-config-data\") pod \"keystone-bootstrap-g2d77\" (UID: \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " pod="openstack/keystone-bootstrap-g2d77" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.919347 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9dgf\" (UniqueName: \"kubernetes.io/projected/c5436dc2-1c05-46b4-9b91-c70bee8c4126-kube-api-access-l9dgf\") pod \"keystone-bootstrap-g2d77\" (UID: \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " pod="openstack/keystone-bootstrap-g2d77" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.919373 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-scripts\") pod \"keystone-bootstrap-g2d77\" (UID: \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " pod="openstack/keystone-bootstrap-g2d77" Feb 18 00:48:06 crc kubenswrapper[4847]: I0218 00:48:06.919633 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-credential-keys\") pod \"keystone-bootstrap-g2d77\" (UID: \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " pod="openstack/keystone-bootstrap-g2d77" Feb 18 00:48:07 crc kubenswrapper[4847]: I0218 00:48:07.022195 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-credential-keys\") pod \"keystone-bootstrap-g2d77\" (UID: \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " pod="openstack/keystone-bootstrap-g2d77" Feb 18 00:48:07 crc kubenswrapper[4847]: I0218 00:48:07.022308 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-combined-ca-bundle\") pod \"keystone-bootstrap-g2d77\" (UID: \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " pod="openstack/keystone-bootstrap-g2d77" Feb 18 00:48:07 crc kubenswrapper[4847]: I0218 00:48:07.022332 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-fernet-keys\") pod \"keystone-bootstrap-g2d77\" (UID: \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " pod="openstack/keystone-bootstrap-g2d77" Feb 18 00:48:07 crc kubenswrapper[4847]: I0218 00:48:07.022429 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-config-data\") pod \"keystone-bootstrap-g2d77\" (UID: \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " pod="openstack/keystone-bootstrap-g2d77" Feb 18 00:48:07 crc kubenswrapper[4847]: I0218 00:48:07.022473 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9dgf\" (UniqueName: \"kubernetes.io/projected/c5436dc2-1c05-46b4-9b91-c70bee8c4126-kube-api-access-l9dgf\") pod \"keystone-bootstrap-g2d77\" (UID: \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " pod="openstack/keystone-bootstrap-g2d77" Feb 18 00:48:07 crc kubenswrapper[4847]: I0218 00:48:07.022496 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-scripts\") pod \"keystone-bootstrap-g2d77\" (UID: \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " pod="openstack/keystone-bootstrap-g2d77" Feb 18 00:48:07 crc kubenswrapper[4847]: I0218 00:48:07.029357 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-fernet-keys\") pod \"keystone-bootstrap-g2d77\" (UID: 
\"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " pod="openstack/keystone-bootstrap-g2d77" Feb 18 00:48:07 crc kubenswrapper[4847]: I0218 00:48:07.030143 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-combined-ca-bundle\") pod \"keystone-bootstrap-g2d77\" (UID: \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " pod="openstack/keystone-bootstrap-g2d77" Feb 18 00:48:07 crc kubenswrapper[4847]: I0218 00:48:07.030376 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-credential-keys\") pod \"keystone-bootstrap-g2d77\" (UID: \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " pod="openstack/keystone-bootstrap-g2d77" Feb 18 00:48:07 crc kubenswrapper[4847]: I0218 00:48:07.031212 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-config-data\") pod \"keystone-bootstrap-g2d77\" (UID: \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " pod="openstack/keystone-bootstrap-g2d77" Feb 18 00:48:07 crc kubenswrapper[4847]: I0218 00:48:07.038042 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-scripts\") pod \"keystone-bootstrap-g2d77\" (UID: \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " pod="openstack/keystone-bootstrap-g2d77" Feb 18 00:48:07 crc kubenswrapper[4847]: I0218 00:48:07.040697 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9dgf\" (UniqueName: \"kubernetes.io/projected/c5436dc2-1c05-46b4-9b91-c70bee8c4126-kube-api-access-l9dgf\") pod \"keystone-bootstrap-g2d77\" (UID: \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " pod="openstack/keystone-bootstrap-g2d77" Feb 18 00:48:07 crc kubenswrapper[4847]: I0218 
00:48:07.157017 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g2d77" Feb 18 00:48:07 crc kubenswrapper[4847]: I0218 00:48:07.424927 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8792bde0-6a55-4830-9220-b9170374ad48" path="/var/lib/kubelet/pods/8792bde0-6a55-4830-9220-b9170374ad48/volumes" Feb 18 00:48:09 crc kubenswrapper[4847]: I0218 00:48:09.547856 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:48:09 crc kubenswrapper[4847]: I0218 00:48:09.620389 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mlss6"] Feb 18 00:48:09 crc kubenswrapper[4847]: I0218 00:48:09.620785 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" podUID="aa5356b9-df2c-412d-ac6d-4039afc1286b" containerName="dnsmasq-dns" containerID="cri-o://9cd439d3918d3de44fefad060cafc1243cef2a67a0f638ddf23cdb6db8425907" gracePeriod=10 Feb 18 00:48:10 crc kubenswrapper[4847]: I0218 00:48:10.718799 4847 generic.go:334] "Generic (PLEG): container finished" podID="aa5356b9-df2c-412d-ac6d-4039afc1286b" containerID="9cd439d3918d3de44fefad060cafc1243cef2a67a0f638ddf23cdb6db8425907" exitCode=0 Feb 18 00:48:10 crc kubenswrapper[4847]: I0218 00:48:10.719684 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" event={"ID":"aa5356b9-df2c-412d-ac6d-4039afc1286b","Type":"ContainerDied","Data":"9cd439d3918d3de44fefad060cafc1243cef2a67a0f638ddf23cdb6db8425907"} Feb 18 00:48:11 crc kubenswrapper[4847]: I0218 00:48:11.905337 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" podUID="aa5356b9-df2c-412d-ac6d-4039afc1286b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: connect: connection refused" Feb 18 00:48:14 crc 
kubenswrapper[4847]: E0218 00:48:14.907427 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 18 00:48:14 crc kubenswrapper[4847]: E0218 00:48:14.908010 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gcmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartP
olicy:nil,} start failed in pod barbican-db-sync-pd45p_openstack(ca59c512-1360-4daf-9ee3-9c5c7cd143e1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:48:14 crc kubenswrapper[4847]: E0218 00:48:14.909179 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-pd45p" podUID="ca59c512-1360-4daf-9ee3-9c5c7cd143e1" Feb 18 00:48:15 crc kubenswrapper[4847]: I0218 00:48:15.771002 4847 generic.go:334] "Generic (PLEG): container finished" podID="d60eb1ff-80a3-47ca-b223-3aa7c7a310c0" containerID="c116e5094ea3264493552ea528dae9d7e9f0ae637beb77a1de03399ef2398e62" exitCode=0 Feb 18 00:48:15 crc kubenswrapper[4847]: I0218 00:48:15.771107 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-czdg6" event={"ID":"d60eb1ff-80a3-47ca-b223-3aa7c7a310c0","Type":"ContainerDied","Data":"c116e5094ea3264493552ea528dae9d7e9f0ae637beb77a1de03399ef2398e62"} Feb 18 00:48:15 crc kubenswrapper[4847]: E0218 00:48:15.773454 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-pd45p" podUID="ca59c512-1360-4daf-9ee3-9c5c7cd143e1" Feb 18 00:48:21 crc kubenswrapper[4847]: I0218 00:48:21.906284 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" podUID="aa5356b9-df2c-412d-ac6d-4039afc1286b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: i/o timeout" Feb 18 00:48:25 crc kubenswrapper[4847]: E0218 00:48:25.657050 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying 
config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Feb 18 00:48:25 crc kubenswrapper[4847]: E0218 00:48:25.659595 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zh7hb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-znxsz_openstack(014e96ac-8dcb-4d73-a9e1-1ade26742005): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:48:25 crc kubenswrapper[4847]: E0218 00:48:25.661287 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-znxsz" podUID="014e96ac-8dcb-4d73-a9e1-1ade26742005" Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.791157 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.794716 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-czdg6" Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.854044 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-dns-svc\") pod \"aa5356b9-df2c-412d-ac6d-4039afc1286b\" (UID: \"aa5356b9-df2c-412d-ac6d-4039afc1286b\") " Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.854116 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-ovsdbserver-nb\") pod \"aa5356b9-df2c-412d-ac6d-4039afc1286b\" (UID: \"aa5356b9-df2c-412d-ac6d-4039afc1286b\") " Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.854292 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn49w\" (UniqueName: \"kubernetes.io/projected/aa5356b9-df2c-412d-ac6d-4039afc1286b-kube-api-access-jn49w\") pod \"aa5356b9-df2c-412d-ac6d-4039afc1286b\" (UID: \"aa5356b9-df2c-412d-ac6d-4039afc1286b\") " Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.854392 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-config\") pod \"aa5356b9-df2c-412d-ac6d-4039afc1286b\" (UID: \"aa5356b9-df2c-412d-ac6d-4039afc1286b\") " Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.854476 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-ovsdbserver-sb\") pod \"aa5356b9-df2c-412d-ac6d-4039afc1286b\" (UID: \"aa5356b9-df2c-412d-ac6d-4039afc1286b\") " Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.881395 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/aa5356b9-df2c-412d-ac6d-4039afc1286b-kube-api-access-jn49w" (OuterVolumeSpecName: "kube-api-access-jn49w") pod "aa5356b9-df2c-412d-ac6d-4039afc1286b" (UID: "aa5356b9-df2c-412d-ac6d-4039afc1286b"). InnerVolumeSpecName "kube-api-access-jn49w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.921135 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aa5356b9-df2c-412d-ac6d-4039afc1286b" (UID: "aa5356b9-df2c-412d-ac6d-4039afc1286b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.922085 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-config" (OuterVolumeSpecName: "config") pod "aa5356b9-df2c-412d-ac6d-4039afc1286b" (UID: "aa5356b9-df2c-412d-ac6d-4039afc1286b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.931927 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aa5356b9-df2c-412d-ac6d-4039afc1286b" (UID: "aa5356b9-df2c-412d-ac6d-4039afc1286b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.932118 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" event={"ID":"aa5356b9-df2c-412d-ac6d-4039afc1286b","Type":"ContainerDied","Data":"c0d2b7b06316a82dd2a36e19e816e88317fe9ea47599c7e7fe0406a458aafd49"} Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.932307 4847 scope.go:117] "RemoveContainer" containerID="9cd439d3918d3de44fefad060cafc1243cef2a67a0f638ddf23cdb6db8425907" Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.932233 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.935339 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aa5356b9-df2c-412d-ac6d-4039afc1286b" (UID: "aa5356b9-df2c-412d-ac6d-4039afc1286b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.935969 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-czdg6" event={"ID":"d60eb1ff-80a3-47ca-b223-3aa7c7a310c0","Type":"ContainerDied","Data":"99d21ac6bdd6f0fe3fc5b1c3fbf549873bf459818fe6e63644fa17610449ec91"} Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.936021 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99d21ac6bdd6f0fe3fc5b1c3fbf549873bf459818fe6e63644fa17610449ec91" Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.936400 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-czdg6" Feb 18 00:48:25 crc kubenswrapper[4847]: E0218 00:48:25.942308 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-znxsz" podUID="014e96ac-8dcb-4d73-a9e1-1ade26742005" Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.959190 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d60eb1ff-80a3-47ca-b223-3aa7c7a310c0-config\") pod \"d60eb1ff-80a3-47ca-b223-3aa7c7a310c0\" (UID: \"d60eb1ff-80a3-47ca-b223-3aa7c7a310c0\") " Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.959353 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhlw8\" (UniqueName: \"kubernetes.io/projected/d60eb1ff-80a3-47ca-b223-3aa7c7a310c0-kube-api-access-dhlw8\") pod \"d60eb1ff-80a3-47ca-b223-3aa7c7a310c0\" (UID: \"d60eb1ff-80a3-47ca-b223-3aa7c7a310c0\") " Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.959514 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d60eb1ff-80a3-47ca-b223-3aa7c7a310c0-combined-ca-bundle\") pod \"d60eb1ff-80a3-47ca-b223-3aa7c7a310c0\" (UID: \"d60eb1ff-80a3-47ca-b223-3aa7c7a310c0\") " Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.959925 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn49w\" (UniqueName: \"kubernetes.io/projected/aa5356b9-df2c-412d-ac6d-4039afc1286b-kube-api-access-jn49w\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.959944 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.959955 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.959963 4847 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.959972 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa5356b9-df2c-412d-ac6d-4039afc1286b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.963859 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d60eb1ff-80a3-47ca-b223-3aa7c7a310c0-kube-api-access-dhlw8" (OuterVolumeSpecName: "kube-api-access-dhlw8") pod "d60eb1ff-80a3-47ca-b223-3aa7c7a310c0" (UID: "d60eb1ff-80a3-47ca-b223-3aa7c7a310c0"). InnerVolumeSpecName "kube-api-access-dhlw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.986369 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d60eb1ff-80a3-47ca-b223-3aa7c7a310c0-config" (OuterVolumeSpecName: "config") pod "d60eb1ff-80a3-47ca-b223-3aa7c7a310c0" (UID: "d60eb1ff-80a3-47ca-b223-3aa7c7a310c0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:25 crc kubenswrapper[4847]: I0218 00:48:25.987094 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d60eb1ff-80a3-47ca-b223-3aa7c7a310c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d60eb1ff-80a3-47ca-b223-3aa7c7a310c0" (UID: "d60eb1ff-80a3-47ca-b223-3aa7c7a310c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:26 crc kubenswrapper[4847]: I0218 00:48:26.061892 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d60eb1ff-80a3-47ca-b223-3aa7c7a310c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:26 crc kubenswrapper[4847]: I0218 00:48:26.061953 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d60eb1ff-80a3-47ca-b223-3aa7c7a310c0-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:26 crc kubenswrapper[4847]: I0218 00:48:26.061963 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhlw8\" (UniqueName: \"kubernetes.io/projected/d60eb1ff-80a3-47ca-b223-3aa7c7a310c0-kube-api-access-dhlw8\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:26 crc kubenswrapper[4847]: I0218 00:48:26.274335 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mlss6"] Feb 18 00:48:26 crc kubenswrapper[4847]: I0218 00:48:26.282514 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-mlss6"] Feb 18 00:48:26 crc kubenswrapper[4847]: I0218 00:48:26.907042 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-mlss6" podUID="aa5356b9-df2c-412d-ac6d-4039afc1286b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: i/o timeout" Feb 18 00:48:26 crc kubenswrapper[4847]: I0218 
00:48:26.952711 4847 scope.go:117] "RemoveContainer" containerID="7c3054a32e13a3bfd577f6bf0b196211289c855d971f59577f8d7ab705caf8fe" Feb 18 00:48:27 crc kubenswrapper[4847]: E0218 00:48:27.029592 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 18 00:48:27 crc kubenswrapper[4847]: E0218 00:48:27.030214 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,Moun
tPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xf4ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-qxdsw_openstack(e40815e0-c0e4-4265-94f8-c9c7b262a011): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 18 00:48:27 crc kubenswrapper[4847]: E0218 00:48:27.031662 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-qxdsw" podUID="e40815e0-c0e4-4265-94f8-c9c7b262a011" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.051374 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-l29m5"] Feb 18 00:48:27 crc kubenswrapper[4847]: E0218 00:48:27.051837 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa5356b9-df2c-412d-ac6d-4039afc1286b" containerName="init" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.051854 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa5356b9-df2c-412d-ac6d-4039afc1286b" containerName="init" Feb 18 00:48:27 
crc kubenswrapper[4847]: E0218 00:48:27.051872 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa5356b9-df2c-412d-ac6d-4039afc1286b" containerName="dnsmasq-dns" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.051879 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa5356b9-df2c-412d-ac6d-4039afc1286b" containerName="dnsmasq-dns" Feb 18 00:48:27 crc kubenswrapper[4847]: E0218 00:48:27.051896 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d60eb1ff-80a3-47ca-b223-3aa7c7a310c0" containerName="neutron-db-sync" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.051902 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="d60eb1ff-80a3-47ca-b223-3aa7c7a310c0" containerName="neutron-db-sync" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.052080 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="d60eb1ff-80a3-47ca-b223-3aa7c7a310c0" containerName="neutron-db-sync" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.052098 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa5356b9-df2c-412d-ac6d-4039afc1286b" containerName="dnsmasq-dns" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.053111 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-l29m5" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.080662 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-l29m5"] Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.172164 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5857d66f7d-gqg2m"] Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.173972 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5857d66f7d-gqg2m" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.180544 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-r4jrh" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.180951 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.181396 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.181405 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.186184 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5857d66f7d-gqg2m"] Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.192842 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pkgf\" (UniqueName: \"kubernetes.io/projected/3aface83-1656-4958-b676-04bd0f99b9ac-kube-api-access-8pkgf\") pod \"dnsmasq-dns-6b7b667979-l29m5\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") " pod="openstack/dnsmasq-dns-6b7b667979-l29m5" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.192905 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-l29m5\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") " pod="openstack/dnsmasq-dns-6b7b667979-l29m5" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.192973 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-ovsdbserver-nb\") pod 
\"dnsmasq-dns-6b7b667979-l29m5\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") " pod="openstack/dnsmasq-dns-6b7b667979-l29m5" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.192992 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-config\") pod \"dnsmasq-dns-6b7b667979-l29m5\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") " pod="openstack/dnsmasq-dns-6b7b667979-l29m5" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.193043 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-l29m5\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") " pod="openstack/dnsmasq-dns-6b7b667979-l29m5" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.193067 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-dns-svc\") pod \"dnsmasq-dns-6b7b667979-l29m5\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") " pod="openstack/dnsmasq-dns-6b7b667979-l29m5" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.294752 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pkgf\" (UniqueName: \"kubernetes.io/projected/3aface83-1656-4958-b676-04bd0f99b9ac-kube-api-access-8pkgf\") pod \"dnsmasq-dns-6b7b667979-l29m5\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") " pod="openstack/dnsmasq-dns-6b7b667979-l29m5" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.295051 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-ovsdbserver-sb\") pod 
\"dnsmasq-dns-6b7b667979-l29m5\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") " pod="openstack/dnsmasq-dns-6b7b667979-l29m5" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.295081 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-combined-ca-bundle\") pod \"neutron-5857d66f7d-gqg2m\" (UID: \"ddb80342-6498-4e44-aa6d-72bba457dbbe\") " pod="openstack/neutron-5857d66f7d-gqg2m" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.295101 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-ovndb-tls-certs\") pod \"neutron-5857d66f7d-gqg2m\" (UID: \"ddb80342-6498-4e44-aa6d-72bba457dbbe\") " pod="openstack/neutron-5857d66f7d-gqg2m" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.295153 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdksx\" (UniqueName: \"kubernetes.io/projected/ddb80342-6498-4e44-aa6d-72bba457dbbe-kube-api-access-zdksx\") pod \"neutron-5857d66f7d-gqg2m\" (UID: \"ddb80342-6498-4e44-aa6d-72bba457dbbe\") " pod="openstack/neutron-5857d66f7d-gqg2m" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.295174 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-l29m5\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") " pod="openstack/dnsmasq-dns-6b7b667979-l29m5" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.295190 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-config\") pod 
\"dnsmasq-dns-6b7b667979-l29m5\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") " pod="openstack/dnsmasq-dns-6b7b667979-l29m5" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.295221 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-config\") pod \"neutron-5857d66f7d-gqg2m\" (UID: \"ddb80342-6498-4e44-aa6d-72bba457dbbe\") " pod="openstack/neutron-5857d66f7d-gqg2m" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.295265 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-l29m5\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") " pod="openstack/dnsmasq-dns-6b7b667979-l29m5" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.295292 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-httpd-config\") pod \"neutron-5857d66f7d-gqg2m\" (UID: \"ddb80342-6498-4e44-aa6d-72bba457dbbe\") " pod="openstack/neutron-5857d66f7d-gqg2m" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.295312 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-dns-svc\") pod \"dnsmasq-dns-6b7b667979-l29m5\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") " pod="openstack/dnsmasq-dns-6b7b667979-l29m5" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.297040 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-dns-svc\") pod \"dnsmasq-dns-6b7b667979-l29m5\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") " 
pod="openstack/dnsmasq-dns-6b7b667979-l29m5" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.297508 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-l29m5\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") " pod="openstack/dnsmasq-dns-6b7b667979-l29m5" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.298638 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-config\") pod \"dnsmasq-dns-6b7b667979-l29m5\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") " pod="openstack/dnsmasq-dns-6b7b667979-l29m5" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.298669 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-l29m5\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") " pod="openstack/dnsmasq-dns-6b7b667979-l29m5" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.299118 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-l29m5\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") " pod="openstack/dnsmasq-dns-6b7b667979-l29m5" Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.325957 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pkgf\" (UniqueName: \"kubernetes.io/projected/3aface83-1656-4958-b676-04bd0f99b9ac-kube-api-access-8pkgf\") pod \"dnsmasq-dns-6b7b667979-l29m5\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") " pod="openstack/dnsmasq-dns-6b7b667979-l29m5" Feb 18 00:48:27 crc kubenswrapper[4847]: 
I0218 00:48:27.396709 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-config\") pod \"neutron-5857d66f7d-gqg2m\" (UID: \"ddb80342-6498-4e44-aa6d-72bba457dbbe\") " pod="openstack/neutron-5857d66f7d-gqg2m"
Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.396804 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-httpd-config\") pod \"neutron-5857d66f7d-gqg2m\" (UID: \"ddb80342-6498-4e44-aa6d-72bba457dbbe\") " pod="openstack/neutron-5857d66f7d-gqg2m"
Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.396918 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-combined-ca-bundle\") pod \"neutron-5857d66f7d-gqg2m\" (UID: \"ddb80342-6498-4e44-aa6d-72bba457dbbe\") " pod="openstack/neutron-5857d66f7d-gqg2m"
Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.396943 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-ovndb-tls-certs\") pod \"neutron-5857d66f7d-gqg2m\" (UID: \"ddb80342-6498-4e44-aa6d-72bba457dbbe\") " pod="openstack/neutron-5857d66f7d-gqg2m"
Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.396991 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdksx\" (UniqueName: \"kubernetes.io/projected/ddb80342-6498-4e44-aa6d-72bba457dbbe-kube-api-access-zdksx\") pod \"neutron-5857d66f7d-gqg2m\" (UID: \"ddb80342-6498-4e44-aa6d-72bba457dbbe\") " pod="openstack/neutron-5857d66f7d-gqg2m"
Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.409581 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-combined-ca-bundle\") pod \"neutron-5857d66f7d-gqg2m\" (UID: \"ddb80342-6498-4e44-aa6d-72bba457dbbe\") " pod="openstack/neutron-5857d66f7d-gqg2m"
Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.410007 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.410182 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.410295 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.414904 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdksx\" (UniqueName: \"kubernetes.io/projected/ddb80342-6498-4e44-aa6d-72bba457dbbe-kube-api-access-zdksx\") pod \"neutron-5857d66f7d-gqg2m\" (UID: \"ddb80342-6498-4e44-aa6d-72bba457dbbe\") " pod="openstack/neutron-5857d66f7d-gqg2m"
Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.418096 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa5356b9-df2c-412d-ac6d-4039afc1286b" path="/var/lib/kubelet/pods/aa5356b9-df2c-412d-ac6d-4039afc1286b/volumes"
Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.421006 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-httpd-config\") pod \"neutron-5857d66f7d-gqg2m\" (UID: \"ddb80342-6498-4e44-aa6d-72bba457dbbe\") " pod="openstack/neutron-5857d66f7d-gqg2m"
Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.423739 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-ovndb-tls-certs\") pod \"neutron-5857d66f7d-gqg2m\" (UID: \"ddb80342-6498-4e44-aa6d-72bba457dbbe\") " pod="openstack/neutron-5857d66f7d-gqg2m"
Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.425099 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-config\") pod \"neutron-5857d66f7d-gqg2m\" (UID: \"ddb80342-6498-4e44-aa6d-72bba457dbbe\") " pod="openstack/neutron-5857d66f7d-gqg2m"
Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.536052 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-g2d77"]
Feb 18 00:48:27 crc kubenswrapper[4847]: W0218 00:48:27.538185 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5436dc2_1c05_46b4_9b91_c70bee8c4126.slice/crio-4c6b25bc9cb00c0b04ab126b49d5e331139604d6052dc9396b4423d20927fbbc WatchSource:0}: Error finding container 4c6b25bc9cb00c0b04ab126b49d5e331139604d6052dc9396b4423d20927fbbc: Status 404 returned error can't find the container with id 4c6b25bc9cb00c0b04ab126b49d5e331139604d6052dc9396b4423d20927fbbc
Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.543814 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.580206 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-l29m5"
Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.610735 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-r4jrh"
Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.619285 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5857d66f7d-gqg2m"
Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.970619 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8dkg7" event={"ID":"de1208a0-7171-4d36-af50-a33f03208e5d","Type":"ContainerStarted","Data":"a8ffb10c1ce865f1d14823461d9725971932b4b00864c9caee4d76a9ab16d82e"}
Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.975486 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g2d77" event={"ID":"c5436dc2-1c05-46b4-9b91-c70bee8c4126","Type":"ContainerStarted","Data":"886a6d9bf7552aa12650386aee94fa74ce31831d72939ed8aeddb060474a946d"}
Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.975523 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g2d77" event={"ID":"c5436dc2-1c05-46b4-9b91-c70bee8c4126","Type":"ContainerStarted","Data":"4c6b25bc9cb00c0b04ab126b49d5e331139604d6052dc9396b4423d20927fbbc"}
Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.978105 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a612e518-e7f5-4c88-8534-16768f748bed","Type":"ContainerStarted","Data":"a4faf8561254c979e53592ccd32511604ad40f95d92cab677e778c8ef0fb8e12"}
Feb 18 00:48:27 crc kubenswrapper[4847]: E0218 00:48:27.978863 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-qxdsw" podUID="e40815e0-c0e4-4265-94f8-c9c7b262a011"
Feb 18 00:48:27 crc kubenswrapper[4847]: I0218 00:48:27.999873 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8dkg7" podStartSLOduration=3.28795339 podStartE2EDuration="29.999855207s" podCreationTimestamp="2026-02-18 00:47:58 +0000 UTC" firstStartedPulling="2026-02-18 00:48:00.22498543 +0000 UTC m=+1353.602336372" lastFinishedPulling="2026-02-18 00:48:26.936887207 +0000 UTC m=+1380.314238189" observedRunningTime="2026-02-18 00:48:27.985937188 +0000 UTC m=+1381.363288130" watchObservedRunningTime="2026-02-18 00:48:27.999855207 +0000 UTC m=+1381.377206149"
Feb 18 00:48:28 crc kubenswrapper[4847]: I0218 00:48:28.050169 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-g2d77" podStartSLOduration=22.050145105 podStartE2EDuration="22.050145105s" podCreationTimestamp="2026-02-18 00:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:48:28.007085198 +0000 UTC m=+1381.384436140" watchObservedRunningTime="2026-02-18 00:48:28.050145105 +0000 UTC m=+1381.427496047"
Feb 18 00:48:28 crc kubenswrapper[4847]: I0218 00:48:28.115543 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-l29m5"]
Feb 18 00:48:28 crc kubenswrapper[4847]: I0218 00:48:28.268244 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5857d66f7d-gqg2m"]
Feb 18 00:48:28 crc kubenswrapper[4847]: I0218 00:48:28.997633 4847 generic.go:334] "Generic (PLEG): container finished" podID="3aface83-1656-4958-b676-04bd0f99b9ac" containerID="8794edf0ded4f806c6495566bc7cf8876155986b2ac36ac80bb7b0c7b6bb6821" exitCode=0
Feb 18 00:48:28 crc kubenswrapper[4847]: I0218 00:48:28.997779 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-l29m5" event={"ID":"3aface83-1656-4958-b676-04bd0f99b9ac","Type":"ContainerDied","Data":"8794edf0ded4f806c6495566bc7cf8876155986b2ac36ac80bb7b0c7b6bb6821"}
Feb 18 00:48:28 crc kubenswrapper[4847]: I0218 00:48:28.998301 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-l29m5" event={"ID":"3aface83-1656-4958-b676-04bd0f99b9ac","Type":"ContainerStarted","Data":"45e0a5ca26d24996a4a6e156e2fb58f2698f896f0f8079dc7981ba6357f2d3e7"}
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.004840 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5857d66f7d-gqg2m" event={"ID":"ddb80342-6498-4e44-aa6d-72bba457dbbe","Type":"ContainerStarted","Data":"4f49aba9c883fc0dffc7b09f488580c619196525f4257b14613b0e8caa3ab209"}
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.004891 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5857d66f7d-gqg2m" event={"ID":"ddb80342-6498-4e44-aa6d-72bba457dbbe","Type":"ContainerStarted","Data":"643a355cc4288a509bc6c4144ab495e8828a61cf1fe162f22092013c465f4281"}
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.004902 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5857d66f7d-gqg2m" event={"ID":"ddb80342-6498-4e44-aa6d-72bba457dbbe","Type":"ContainerStarted","Data":"9fb00cbbed76cc8954c89c2ecc5d5760b24f2f4dc25a935ba12eba44fb52342d"}
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.097417 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5857d66f7d-gqg2m" podStartSLOduration=2.097388232 podStartE2EDuration="2.097388232s" podCreationTimestamp="2026-02-18 00:48:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:48:29.059662817 +0000 UTC m=+1382.437013779" watchObservedRunningTime="2026-02-18 00:48:29.097388232 +0000 UTC m=+1382.474739174"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.621172 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-67dc676569-x5csl"]
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.623016 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.625041 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.627590 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.633742 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67dc676569-x5csl"]
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.747973 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-public-tls-certs\") pod \"neutron-67dc676569-x5csl\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.748063 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s77wj\" (UniqueName: \"kubernetes.io/projected/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-kube-api-access-s77wj\") pod \"neutron-67dc676569-x5csl\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.748111 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-httpd-config\") pod \"neutron-67dc676569-x5csl\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.748136 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-ovndb-tls-certs\") pod \"neutron-67dc676569-x5csl\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.748332 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-config\") pod \"neutron-67dc676569-x5csl\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.748377 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-internal-tls-certs\") pod \"neutron-67dc676569-x5csl\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.748514 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-combined-ca-bundle\") pod \"neutron-67dc676569-x5csl\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.850455 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-config\") pod \"neutron-67dc676569-x5csl\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.850503 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-internal-tls-certs\") pod \"neutron-67dc676569-x5csl\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.850538 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-combined-ca-bundle\") pod \"neutron-67dc676569-x5csl\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.850661 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-public-tls-certs\") pod \"neutron-67dc676569-x5csl\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.850725 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s77wj\" (UniqueName: \"kubernetes.io/projected/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-kube-api-access-s77wj\") pod \"neutron-67dc676569-x5csl\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.850887 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-httpd-config\") pod \"neutron-67dc676569-x5csl\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.850911 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-ovndb-tls-certs\") pod \"neutron-67dc676569-x5csl\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.856236 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-httpd-config\") pod \"neutron-67dc676569-x5csl\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.856984 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-combined-ca-bundle\") pod \"neutron-67dc676569-x5csl\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.857414 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-public-tls-certs\") pod \"neutron-67dc676569-x5csl\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.858027 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-config\") pod \"neutron-67dc676569-x5csl\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.858775 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-internal-tls-certs\") pod \"neutron-67dc676569-x5csl\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.861504 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-ovndb-tls-certs\") pod \"neutron-67dc676569-x5csl\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.876389 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s77wj\" (UniqueName: \"kubernetes.io/projected/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-kube-api-access-s77wj\") pod \"neutron-67dc676569-x5csl\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:29 crc kubenswrapper[4847]: I0218 00:48:29.939044 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:30 crc kubenswrapper[4847]: I0218 00:48:30.023717 4847 generic.go:334] "Generic (PLEG): container finished" podID="de1208a0-7171-4d36-af50-a33f03208e5d" containerID="a8ffb10c1ce865f1d14823461d9725971932b4b00864c9caee4d76a9ab16d82e" exitCode=0
Feb 18 00:48:30 crc kubenswrapper[4847]: I0218 00:48:30.024033 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8dkg7" event={"ID":"de1208a0-7171-4d36-af50-a33f03208e5d","Type":"ContainerDied","Data":"a8ffb10c1ce865f1d14823461d9725971932b4b00864c9caee4d76a9ab16d82e"}
Feb 18 00:48:30 crc kubenswrapper[4847]: I0218 00:48:30.024518 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5857d66f7d-gqg2m"
Feb 18 00:48:30 crc kubenswrapper[4847]: I0218 00:48:30.485281 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67dc676569-x5csl"]
Feb 18 00:48:30 crc kubenswrapper[4847]: W0218 00:48:30.486058 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87700b8a_be8c_46fe_a7a6_ec022ab8c87c.slice/crio-5daf6102b87fb83ffc96f822875e82eaee9c45408add7713d3374d9680d391b6 WatchSource:0}: Error finding container 5daf6102b87fb83ffc96f822875e82eaee9c45408add7713d3374d9680d391b6: Status 404 returned error can't find the container with id 5daf6102b87fb83ffc96f822875e82eaee9c45408add7713d3374d9680d391b6
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.041308 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67dc676569-x5csl" event={"ID":"87700b8a-be8c-46fe-a7a6-ec022ab8c87c","Type":"ContainerStarted","Data":"98c3847fc17c2dcdb878f91215424724bd9eeef26f6a6c0ba24055626060a239"}
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.041691 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67dc676569-x5csl" event={"ID":"87700b8a-be8c-46fe-a7a6-ec022ab8c87c","Type":"ContainerStarted","Data":"cbb7fc726aaf64f862df797728b8e837eb204434fc78233e5b929f74f594f2f4"}
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.041711 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67dc676569-x5csl" event={"ID":"87700b8a-be8c-46fe-a7a6-ec022ab8c87c","Type":"ContainerStarted","Data":"5daf6102b87fb83ffc96f822875e82eaee9c45408add7713d3374d9680d391b6"}
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.042926 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-67dc676569-x5csl"
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.052173 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pd45p" event={"ID":"ca59c512-1360-4daf-9ee3-9c5c7cd143e1","Type":"ContainerStarted","Data":"3e5125c28a0fae1cd119f373fc2cf7de8b5b95c52c0e789c1c1ead030b8e196f"}
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.057669 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-l29m5" event={"ID":"3aface83-1656-4958-b676-04bd0f99b9ac","Type":"ContainerStarted","Data":"2114ec12f80bef88d6573ea0e1d0a6ca9ce346d90d878f1f837ef8cd45ee4e54"}
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.058195 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-l29m5"
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.060197 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a612e518-e7f5-4c88-8534-16768f748bed","Type":"ContainerStarted","Data":"677009da1af8e905666a7aeef540e1f94bb0fe8f2689e0098dbb4929c4cd3291"}
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.095699 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-pd45p" podStartSLOduration=3.5222738 podStartE2EDuration="33.095677976s" podCreationTimestamp="2026-02-18 00:47:58 +0000 UTC" firstStartedPulling="2026-02-18 00:48:00.316122428 +0000 UTC m=+1353.693473370" lastFinishedPulling="2026-02-18 00:48:29.889526604 +0000 UTC m=+1383.266877546" observedRunningTime="2026-02-18 00:48:31.088809354 +0000 UTC m=+1384.466160306" watchObservedRunningTime="2026-02-18 00:48:31.095677976 +0000 UTC m=+1384.473028918"
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.102128 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-67dc676569-x5csl" podStartSLOduration=2.1021026369999998 podStartE2EDuration="2.102102637s" podCreationTimestamp="2026-02-18 00:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:48:31.068503706 +0000 UTC m=+1384.445854648" watchObservedRunningTime="2026-02-18 00:48:31.102102637 +0000 UTC m=+1384.479453579"
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.118958 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-l29m5" podStartSLOduration=5.118934238 podStartE2EDuration="5.118934238s" podCreationTimestamp="2026-02-18 00:48:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:48:31.108746493 +0000 UTC m=+1384.486097435" watchObservedRunningTime="2026-02-18 00:48:31.118934238 +0000 UTC m=+1384.496285180"
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.503101 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8dkg7"
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.619709 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsx2q\" (UniqueName: \"kubernetes.io/projected/de1208a0-7171-4d36-af50-a33f03208e5d-kube-api-access-qsx2q\") pod \"de1208a0-7171-4d36-af50-a33f03208e5d\" (UID: \"de1208a0-7171-4d36-af50-a33f03208e5d\") "
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.619831 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1208a0-7171-4d36-af50-a33f03208e5d-logs\") pod \"de1208a0-7171-4d36-af50-a33f03208e5d\" (UID: \"de1208a0-7171-4d36-af50-a33f03208e5d\") "
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.619912 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de1208a0-7171-4d36-af50-a33f03208e5d-scripts\") pod \"de1208a0-7171-4d36-af50-a33f03208e5d\" (UID: \"de1208a0-7171-4d36-af50-a33f03208e5d\") "
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.620001 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1208a0-7171-4d36-af50-a33f03208e5d-config-data\") pod \"de1208a0-7171-4d36-af50-a33f03208e5d\" (UID: \"de1208a0-7171-4d36-af50-a33f03208e5d\") "
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.620024 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1208a0-7171-4d36-af50-a33f03208e5d-combined-ca-bundle\") pod \"de1208a0-7171-4d36-af50-a33f03208e5d\" (UID: \"de1208a0-7171-4d36-af50-a33f03208e5d\") "
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.620227 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1208a0-7171-4d36-af50-a33f03208e5d-logs" (OuterVolumeSpecName: "logs") pod "de1208a0-7171-4d36-af50-a33f03208e5d" (UID: "de1208a0-7171-4d36-af50-a33f03208e5d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.621084 4847 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de1208a0-7171-4d36-af50-a33f03208e5d-logs\") on node \"crc\" DevicePath \"\""
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.625325 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1208a0-7171-4d36-af50-a33f03208e5d-scripts" (OuterVolumeSpecName: "scripts") pod "de1208a0-7171-4d36-af50-a33f03208e5d" (UID: "de1208a0-7171-4d36-af50-a33f03208e5d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.639693 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de1208a0-7171-4d36-af50-a33f03208e5d-kube-api-access-qsx2q" (OuterVolumeSpecName: "kube-api-access-qsx2q") pod "de1208a0-7171-4d36-af50-a33f03208e5d" (UID: "de1208a0-7171-4d36-af50-a33f03208e5d"). InnerVolumeSpecName "kube-api-access-qsx2q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.665352 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1208a0-7171-4d36-af50-a33f03208e5d-config-data" (OuterVolumeSpecName: "config-data") pod "de1208a0-7171-4d36-af50-a33f03208e5d" (UID: "de1208a0-7171-4d36-af50-a33f03208e5d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.666219 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1208a0-7171-4d36-af50-a33f03208e5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de1208a0-7171-4d36-af50-a33f03208e5d" (UID: "de1208a0-7171-4d36-af50-a33f03208e5d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.722953 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de1208a0-7171-4d36-af50-a33f03208e5d-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.723024 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de1208a0-7171-4d36-af50-a33f03208e5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.723041 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsx2q\" (UniqueName: \"kubernetes.io/projected/de1208a0-7171-4d36-af50-a33f03208e5d-kube-api-access-qsx2q\") on node \"crc\" DevicePath \"\""
Feb 18 00:48:31 crc kubenswrapper[4847]: I0218 00:48:31.723064 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de1208a0-7171-4d36-af50-a33f03208e5d-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.103034 4847 generic.go:334] "Generic (PLEG): container finished" podID="c5436dc2-1c05-46b4-9b91-c70bee8c4126" containerID="886a6d9bf7552aa12650386aee94fa74ce31831d72939ed8aeddb060474a946d" exitCode=0
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.103186 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g2d77" event={"ID":"c5436dc2-1c05-46b4-9b91-c70bee8c4126","Type":"ContainerDied","Data":"886a6d9bf7552aa12650386aee94fa74ce31831d72939ed8aeddb060474a946d"}
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.107083 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8dkg7"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.109747 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8dkg7" event={"ID":"de1208a0-7171-4d36-af50-a33f03208e5d","Type":"ContainerDied","Data":"39399715a96c10c240f8f6ed0a0380c21960578e4f031dde986504085f8c8c29"}
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.109791 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39399715a96c10c240f8f6ed0a0380c21960578e4f031dde986504085f8c8c29"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.165891 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-556dbf5b5b-fmjz4"]
Feb 18 00:48:32 crc kubenswrapper[4847]: E0218 00:48:32.166425 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1208a0-7171-4d36-af50-a33f03208e5d" containerName="placement-db-sync"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.166440 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1208a0-7171-4d36-af50-a33f03208e5d" containerName="placement-db-sync"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.166733 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1208a0-7171-4d36-af50-a33f03208e5d" containerName="placement-db-sync"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.168008 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-556dbf5b5b-fmjz4"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.174513 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.174557 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.174784 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.174892 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-5cpw4"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.175241 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.209199 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-556dbf5b5b-fmjz4"]
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.335586 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-public-tls-certs\") pod \"placement-556dbf5b5b-fmjz4\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") " pod="openstack/placement-556dbf5b5b-fmjz4"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.335738 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-config-data\") pod \"placement-556dbf5b5b-fmjz4\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") " pod="openstack/placement-556dbf5b5b-fmjz4"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.335786 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-combined-ca-bundle\") pod \"placement-556dbf5b5b-fmjz4\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") " pod="openstack/placement-556dbf5b5b-fmjz4"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.336101 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d6a2670-a6f9-4fe7-8356-16cee45d0167-logs\") pod \"placement-556dbf5b5b-fmjz4\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") " pod="openstack/placement-556dbf5b5b-fmjz4"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.336165 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lznr7\" (UniqueName: \"kubernetes.io/projected/7d6a2670-a6f9-4fe7-8356-16cee45d0167-kube-api-access-lznr7\") pod \"placement-556dbf5b5b-fmjz4\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") " pod="openstack/placement-556dbf5b5b-fmjz4"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.336390 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-scripts\") pod \"placement-556dbf5b5b-fmjz4\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") " pod="openstack/placement-556dbf5b5b-fmjz4"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.336578 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-internal-tls-certs\") pod \"placement-556dbf5b5b-fmjz4\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") " pod="openstack/placement-556dbf5b5b-fmjz4"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.438470 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d6a2670-a6f9-4fe7-8356-16cee45d0167-logs\") pod \"placement-556dbf5b5b-fmjz4\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") " pod="openstack/placement-556dbf5b5b-fmjz4"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.438565 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lznr7\" (UniqueName: \"kubernetes.io/projected/7d6a2670-a6f9-4fe7-8356-16cee45d0167-kube-api-access-lznr7\") pod \"placement-556dbf5b5b-fmjz4\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") " pod="openstack/placement-556dbf5b5b-fmjz4"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.438701 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-scripts\") pod \"placement-556dbf5b5b-fmjz4\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") " pod="openstack/placement-556dbf5b5b-fmjz4"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.438791 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-internal-tls-certs\") pod \"placement-556dbf5b5b-fmjz4\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") " pod="openstack/placement-556dbf5b5b-fmjz4"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.438863 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-public-tls-certs\") pod \"placement-556dbf5b5b-fmjz4\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") " pod="openstack/placement-556dbf5b5b-fmjz4"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.438910 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-config-data\") pod \"placement-556dbf5b5b-fmjz4\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") " pod="openstack/placement-556dbf5b5b-fmjz4"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.438925 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d6a2670-a6f9-4fe7-8356-16cee45d0167-logs\") pod \"placement-556dbf5b5b-fmjz4\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") " pod="openstack/placement-556dbf5b5b-fmjz4"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.438958 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-combined-ca-bundle\") pod \"placement-556dbf5b5b-fmjz4\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") " pod="openstack/placement-556dbf5b5b-fmjz4"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.443779 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-scripts\") pod \"placement-556dbf5b5b-fmjz4\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") " pod="openstack/placement-556dbf5b5b-fmjz4"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.444230 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-public-tls-certs\") pod \"placement-556dbf5b5b-fmjz4\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") " pod="openstack/placement-556dbf5b5b-fmjz4"
Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.446901 4847 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-combined-ca-bundle\") pod \"placement-556dbf5b5b-fmjz4\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") " pod="openstack/placement-556dbf5b5b-fmjz4" Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.446993 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-config-data\") pod \"placement-556dbf5b5b-fmjz4\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") " pod="openstack/placement-556dbf5b5b-fmjz4" Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.456018 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lznr7\" (UniqueName: \"kubernetes.io/projected/7d6a2670-a6f9-4fe7-8356-16cee45d0167-kube-api-access-lznr7\") pod \"placement-556dbf5b5b-fmjz4\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") " pod="openstack/placement-556dbf5b5b-fmjz4" Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.459355 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-internal-tls-certs\") pod \"placement-556dbf5b5b-fmjz4\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") " pod="openstack/placement-556dbf5b5b-fmjz4" Feb 18 00:48:32 crc kubenswrapper[4847]: I0218 00:48:32.492138 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-556dbf5b5b-fmjz4" Feb 18 00:48:34 crc kubenswrapper[4847]: I0218 00:48:34.145571 4847 generic.go:334] "Generic (PLEG): container finished" podID="ca59c512-1360-4daf-9ee3-9c5c7cd143e1" containerID="3e5125c28a0fae1cd119f373fc2cf7de8b5b95c52c0e789c1c1ead030b8e196f" exitCode=0 Feb 18 00:48:34 crc kubenswrapper[4847]: I0218 00:48:34.145639 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pd45p" event={"ID":"ca59c512-1360-4daf-9ee3-9c5c7cd143e1","Type":"ContainerDied","Data":"3e5125c28a0fae1cd119f373fc2cf7de8b5b95c52c0e789c1c1ead030b8e196f"} Feb 18 00:48:34 crc kubenswrapper[4847]: I0218 00:48:34.540533 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g2d77" Feb 18 00:48:34 crc kubenswrapper[4847]: I0218 00:48:34.682909 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9dgf\" (UniqueName: \"kubernetes.io/projected/c5436dc2-1c05-46b4-9b91-c70bee8c4126-kube-api-access-l9dgf\") pod \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\" (UID: \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " Feb 18 00:48:34 crc kubenswrapper[4847]: I0218 00:48:34.683049 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-combined-ca-bundle\") pod \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\" (UID: \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " Feb 18 00:48:34 crc kubenswrapper[4847]: I0218 00:48:34.683151 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-credential-keys\") pod \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\" (UID: \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " Feb 18 00:48:34 crc kubenswrapper[4847]: I0218 00:48:34.683193 4847 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-scripts\") pod \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\" (UID: \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " Feb 18 00:48:34 crc kubenswrapper[4847]: I0218 00:48:34.683256 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-fernet-keys\") pod \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\" (UID: \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " Feb 18 00:48:34 crc kubenswrapper[4847]: I0218 00:48:34.683276 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-config-data\") pod \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\" (UID: \"c5436dc2-1c05-46b4-9b91-c70bee8c4126\") " Feb 18 00:48:34 crc kubenswrapper[4847]: I0218 00:48:34.694934 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c5436dc2-1c05-46b4-9b91-c70bee8c4126" (UID: "c5436dc2-1c05-46b4-9b91-c70bee8c4126"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:34 crc kubenswrapper[4847]: I0218 00:48:34.695250 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-scripts" (OuterVolumeSpecName: "scripts") pod "c5436dc2-1c05-46b4-9b91-c70bee8c4126" (UID: "c5436dc2-1c05-46b4-9b91-c70bee8c4126"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:34 crc kubenswrapper[4847]: I0218 00:48:34.698750 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c5436dc2-1c05-46b4-9b91-c70bee8c4126" (UID: "c5436dc2-1c05-46b4-9b91-c70bee8c4126"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:34 crc kubenswrapper[4847]: I0218 00:48:34.708798 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5436dc2-1c05-46b4-9b91-c70bee8c4126-kube-api-access-l9dgf" (OuterVolumeSpecName: "kube-api-access-l9dgf") pod "c5436dc2-1c05-46b4-9b91-c70bee8c4126" (UID: "c5436dc2-1c05-46b4-9b91-c70bee8c4126"). InnerVolumeSpecName "kube-api-access-l9dgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:48:34 crc kubenswrapper[4847]: I0218 00:48:34.728116 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5436dc2-1c05-46b4-9b91-c70bee8c4126" (UID: "c5436dc2-1c05-46b4-9b91-c70bee8c4126"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:34 crc kubenswrapper[4847]: I0218 00:48:34.744809 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-config-data" (OuterVolumeSpecName: "config-data") pod "c5436dc2-1c05-46b4-9b91-c70bee8c4126" (UID: "c5436dc2-1c05-46b4-9b91-c70bee8c4126"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:34 crc kubenswrapper[4847]: I0218 00:48:34.785806 4847 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:34 crc kubenswrapper[4847]: I0218 00:48:34.785865 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:34 crc kubenswrapper[4847]: I0218 00:48:34.785886 4847 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:34 crc kubenswrapper[4847]: I0218 00:48:34.785897 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:34 crc kubenswrapper[4847]: I0218 00:48:34.785919 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9dgf\" (UniqueName: \"kubernetes.io/projected/c5436dc2-1c05-46b4-9b91-c70bee8c4126-kube-api-access-l9dgf\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:34 crc kubenswrapper[4847]: I0218 00:48:34.785940 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5436dc2-1c05-46b4-9b91-c70bee8c4126-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.181134 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g2d77" event={"ID":"c5436dc2-1c05-46b4-9b91-c70bee8c4126","Type":"ContainerDied","Data":"4c6b25bc9cb00c0b04ab126b49d5e331139604d6052dc9396b4423d20927fbbc"} Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 
00:48:35.181470 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c6b25bc9cb00c0b04ab126b49d5e331139604d6052dc9396b4423d20927fbbc" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.181157 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g2d77" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.692803 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-747f4858ff-m9tz2"] Feb 18 00:48:35 crc kubenswrapper[4847]: E0218 00:48:35.693294 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5436dc2-1c05-46b4-9b91-c70bee8c4126" containerName="keystone-bootstrap" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.693311 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5436dc2-1c05-46b4-9b91-c70bee8c4126" containerName="keystone-bootstrap" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.693484 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5436dc2-1c05-46b4-9b91-c70bee8c4126" containerName="keystone-bootstrap" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.694197 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.697366 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.697550 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.697639 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.698051 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.698209 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-5phwk" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.700044 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.703452 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-747f4858ff-m9tz2"] Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.806299 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5950d31e-b5dd-43e7-accb-570faedeb30a-public-tls-certs\") pod \"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.806376 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5950d31e-b5dd-43e7-accb-570faedeb30a-combined-ca-bundle\") pod \"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " 
pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.806394 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drcss\" (UniqueName: \"kubernetes.io/projected/5950d31e-b5dd-43e7-accb-570faedeb30a-kube-api-access-drcss\") pod \"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.806414 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5950d31e-b5dd-43e7-accb-570faedeb30a-scripts\") pod \"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.806500 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5950d31e-b5dd-43e7-accb-570faedeb30a-config-data\") pod \"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.806631 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5950d31e-b5dd-43e7-accb-570faedeb30a-fernet-keys\") pod \"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.806667 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5950d31e-b5dd-43e7-accb-570faedeb30a-credential-keys\") pod \"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " 
pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.806699 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5950d31e-b5dd-43e7-accb-570faedeb30a-internal-tls-certs\") pod \"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.909128 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5950d31e-b5dd-43e7-accb-570faedeb30a-public-tls-certs\") pod \"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.909531 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5950d31e-b5dd-43e7-accb-570faedeb30a-combined-ca-bundle\") pod \"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.909558 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drcss\" (UniqueName: \"kubernetes.io/projected/5950d31e-b5dd-43e7-accb-570faedeb30a-kube-api-access-drcss\") pod \"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.909576 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5950d31e-b5dd-43e7-accb-570faedeb30a-scripts\") pod \"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 
crc kubenswrapper[4847]: I0218 00:48:35.909659 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5950d31e-b5dd-43e7-accb-570faedeb30a-config-data\") pod \"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.909708 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5950d31e-b5dd-43e7-accb-570faedeb30a-fernet-keys\") pod \"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.909748 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5950d31e-b5dd-43e7-accb-570faedeb30a-credential-keys\") pod \"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.909795 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5950d31e-b5dd-43e7-accb-570faedeb30a-internal-tls-certs\") pod \"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.914765 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5950d31e-b5dd-43e7-accb-570faedeb30a-internal-tls-certs\") pod \"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.915037 4847 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5950d31e-b5dd-43e7-accb-570faedeb30a-fernet-keys\") pod \"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.920531 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5950d31e-b5dd-43e7-accb-570faedeb30a-config-data\") pod \"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.923019 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5950d31e-b5dd-43e7-accb-570faedeb30a-scripts\") pod \"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.926330 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5950d31e-b5dd-43e7-accb-570faedeb30a-public-tls-certs\") pod \"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.935250 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5950d31e-b5dd-43e7-accb-570faedeb30a-combined-ca-bundle\") pod \"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.937649 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5950d31e-b5dd-43e7-accb-570faedeb30a-credential-keys\") pod 
\"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:35 crc kubenswrapper[4847]: I0218 00:48:35.938334 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drcss\" (UniqueName: \"kubernetes.io/projected/5950d31e-b5dd-43e7-accb-570faedeb30a-kube-api-access-drcss\") pod \"keystone-747f4858ff-m9tz2\" (UID: \"5950d31e-b5dd-43e7-accb-570faedeb30a\") " pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:36 crc kubenswrapper[4847]: I0218 00:48:36.023676 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:37 crc kubenswrapper[4847]: I0218 00:48:37.225826 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pd45p" event={"ID":"ca59c512-1360-4daf-9ee3-9c5c7cd143e1","Type":"ContainerDied","Data":"09cb8ad2cd0b3773f9cb0e8e138702d9057baba5b2e10791b56e7ebe372908c0"} Feb 18 00:48:37 crc kubenswrapper[4847]: I0218 00:48:37.226238 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09cb8ad2cd0b3773f9cb0e8e138702d9057baba5b2e10791b56e7ebe372908c0" Feb 18 00:48:37 crc kubenswrapper[4847]: I0218 00:48:37.275059 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-pd45p" Feb 18 00:48:37 crc kubenswrapper[4847]: I0218 00:48:37.342248 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca59c512-1360-4daf-9ee3-9c5c7cd143e1-combined-ca-bundle\") pod \"ca59c512-1360-4daf-9ee3-9c5c7cd143e1\" (UID: \"ca59c512-1360-4daf-9ee3-9c5c7cd143e1\") " Feb 18 00:48:37 crc kubenswrapper[4847]: I0218 00:48:37.342339 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gcmd\" (UniqueName: \"kubernetes.io/projected/ca59c512-1360-4daf-9ee3-9c5c7cd143e1-kube-api-access-4gcmd\") pod \"ca59c512-1360-4daf-9ee3-9c5c7cd143e1\" (UID: \"ca59c512-1360-4daf-9ee3-9c5c7cd143e1\") " Feb 18 00:48:37 crc kubenswrapper[4847]: I0218 00:48:37.342459 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ca59c512-1360-4daf-9ee3-9c5c7cd143e1-db-sync-config-data\") pod \"ca59c512-1360-4daf-9ee3-9c5c7cd143e1\" (UID: \"ca59c512-1360-4daf-9ee3-9c5c7cd143e1\") " Feb 18 00:48:37 crc kubenswrapper[4847]: I0218 00:48:37.354331 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca59c512-1360-4daf-9ee3-9c5c7cd143e1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ca59c512-1360-4daf-9ee3-9c5c7cd143e1" (UID: "ca59c512-1360-4daf-9ee3-9c5c7cd143e1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:37 crc kubenswrapper[4847]: I0218 00:48:37.354584 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca59c512-1360-4daf-9ee3-9c5c7cd143e1-kube-api-access-4gcmd" (OuterVolumeSpecName: "kube-api-access-4gcmd") pod "ca59c512-1360-4daf-9ee3-9c5c7cd143e1" (UID: "ca59c512-1360-4daf-9ee3-9c5c7cd143e1"). 
InnerVolumeSpecName "kube-api-access-4gcmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:48:37 crc kubenswrapper[4847]: I0218 00:48:37.375241 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca59c512-1360-4daf-9ee3-9c5c7cd143e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca59c512-1360-4daf-9ee3-9c5c7cd143e1" (UID: "ca59c512-1360-4daf-9ee3-9c5c7cd143e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:37 crc kubenswrapper[4847]: I0218 00:48:37.444818 4847 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ca59c512-1360-4daf-9ee3-9c5c7cd143e1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:37 crc kubenswrapper[4847]: I0218 00:48:37.444846 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca59c512-1360-4daf-9ee3-9c5c7cd143e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:37 crc kubenswrapper[4847]: I0218 00:48:37.444855 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gcmd\" (UniqueName: \"kubernetes.io/projected/ca59c512-1360-4daf-9ee3-9c5c7cd143e1-kube-api-access-4gcmd\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:37 crc kubenswrapper[4847]: I0218 00:48:37.581758 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-l29m5" Feb 18 00:48:37 crc kubenswrapper[4847]: I0218 00:48:37.629171 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-747f4858ff-m9tz2"] Feb 18 00:48:37 crc kubenswrapper[4847]: I0218 00:48:37.671196 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-dxph7"] Feb 18 00:48:37 crc kubenswrapper[4847]: I0218 00:48:37.671443 4847 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" podUID="61cc2fbc-ba97-4934-888b-a52b7329727d" containerName="dnsmasq-dns" containerID="cri-o://69f6ac37d2b7dba1207960455e9854a00bcd0fadefa66513bdc8b1776c8e16b8" gracePeriod=10 Feb 18 00:48:37 crc kubenswrapper[4847]: I0218 00:48:37.755617 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-556dbf5b5b-fmjz4"] Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.238657 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-556dbf5b5b-fmjz4" event={"ID":"7d6a2670-a6f9-4fe7-8356-16cee45d0167","Type":"ContainerStarted","Data":"69456edca1a4d92be728d83efe1bc2e0767d48bce249cbe098d4830d884ffe42"} Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.239043 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-556dbf5b5b-fmjz4" event={"ID":"7d6a2670-a6f9-4fe7-8356-16cee45d0167","Type":"ContainerStarted","Data":"6d59e58c79fb104820576f24a0b9b9995e49e202a92ee24487e55477fc033ebb"} Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.242198 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-747f4858ff-m9tz2" event={"ID":"5950d31e-b5dd-43e7-accb-570faedeb30a","Type":"ContainerStarted","Data":"1b334fe85ab4047d8d7b01ca8dd7b4fe9bfb6e0cf43fb385e7be7f09adc119a3"} Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.242229 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-747f4858ff-m9tz2" event={"ID":"5950d31e-b5dd-43e7-accb-570faedeb30a","Type":"ContainerStarted","Data":"552fc063b7d0905c0fc909b60ba3d92eb9ed8ea623521757aa0bf0f9d7375e22"} Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.242841 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.264531 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.271836 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a612e518-e7f5-4c88-8534-16768f748bed","Type":"ContainerStarted","Data":"79647fde066f83ce069c92c33723f7503bea64ac6946f516bb71192d81bfcb8f"} Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.276768 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-747f4858ff-m9tz2" podStartSLOduration=3.276756914 podStartE2EDuration="3.276756914s" podCreationTimestamp="2026-02-18 00:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:48:38.276106798 +0000 UTC m=+1391.653457740" watchObservedRunningTime="2026-02-18 00:48:38.276756914 +0000 UTC m=+1391.654107856" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.278139 4847 generic.go:334] "Generic (PLEG): container finished" podID="61cc2fbc-ba97-4934-888b-a52b7329727d" containerID="69f6ac37d2b7dba1207960455e9854a00bcd0fadefa66513bdc8b1776c8e16b8" exitCode=0 Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.278216 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pd45p" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.279006 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.279307 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" event={"ID":"61cc2fbc-ba97-4934-888b-a52b7329727d","Type":"ContainerDied","Data":"69f6ac37d2b7dba1207960455e9854a00bcd0fadefa66513bdc8b1776c8e16b8"} Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.279398 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-dxph7" event={"ID":"61cc2fbc-ba97-4934-888b-a52b7329727d","Type":"ContainerDied","Data":"fd009e03d1f99ae75b685e339fbf9898783ac9e063739dfff7f5db06c58073d0"} Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.279474 4847 scope.go:117] "RemoveContainer" containerID="69f6ac37d2b7dba1207960455e9854a00bcd0fadefa66513bdc8b1776c8e16b8" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.308703 4847 scope.go:117] "RemoveContainer" containerID="5eff6c224383356bf3bbfb5592e49ea1d884c64d51769216473768a350e3cf28" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.366523 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-ovsdbserver-nb\") pod \"61cc2fbc-ba97-4934-888b-a52b7329727d\" (UID: \"61cc2fbc-ba97-4934-888b-a52b7329727d\") " Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.366891 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wwtm\" (UniqueName: \"kubernetes.io/projected/61cc2fbc-ba97-4934-888b-a52b7329727d-kube-api-access-2wwtm\") pod \"61cc2fbc-ba97-4934-888b-a52b7329727d\" (UID: \"61cc2fbc-ba97-4934-888b-a52b7329727d\") " Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.367031 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-dns-svc\") pod \"61cc2fbc-ba97-4934-888b-a52b7329727d\" (UID: \"61cc2fbc-ba97-4934-888b-a52b7329727d\") " Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.367058 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-config\") pod \"61cc2fbc-ba97-4934-888b-a52b7329727d\" (UID: \"61cc2fbc-ba97-4934-888b-a52b7329727d\") " Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.367122 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-ovsdbserver-sb\") pod \"61cc2fbc-ba97-4934-888b-a52b7329727d\" (UID: \"61cc2fbc-ba97-4934-888b-a52b7329727d\") " Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.367186 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-dns-swift-storage-0\") pod \"61cc2fbc-ba97-4934-888b-a52b7329727d\" (UID: \"61cc2fbc-ba97-4934-888b-a52b7329727d\") " Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.409987 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61cc2fbc-ba97-4934-888b-a52b7329727d-kube-api-access-2wwtm" (OuterVolumeSpecName: "kube-api-access-2wwtm") pod "61cc2fbc-ba97-4934-888b-a52b7329727d" (UID: "61cc2fbc-ba97-4934-888b-a52b7329727d"). InnerVolumeSpecName "kube-api-access-2wwtm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.484149 4847 scope.go:117] "RemoveContainer" containerID="69f6ac37d2b7dba1207960455e9854a00bcd0fadefa66513bdc8b1776c8e16b8" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.486457 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wwtm\" (UniqueName: \"kubernetes.io/projected/61cc2fbc-ba97-4934-888b-a52b7329727d-kube-api-access-2wwtm\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:38 crc kubenswrapper[4847]: E0218 00:48:38.493130 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69f6ac37d2b7dba1207960455e9854a00bcd0fadefa66513bdc8b1776c8e16b8\": container with ID starting with 69f6ac37d2b7dba1207960455e9854a00bcd0fadefa66513bdc8b1776c8e16b8 not found: ID does not exist" containerID="69f6ac37d2b7dba1207960455e9854a00bcd0fadefa66513bdc8b1776c8e16b8" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.493180 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69f6ac37d2b7dba1207960455e9854a00bcd0fadefa66513bdc8b1776c8e16b8"} err="failed to get container status \"69f6ac37d2b7dba1207960455e9854a00bcd0fadefa66513bdc8b1776c8e16b8\": rpc error: code = NotFound desc = could not find container \"69f6ac37d2b7dba1207960455e9854a00bcd0fadefa66513bdc8b1776c8e16b8\": container with ID starting with 69f6ac37d2b7dba1207960455e9854a00bcd0fadefa66513bdc8b1776c8e16b8 not found: ID does not exist" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.493216 4847 scope.go:117] "RemoveContainer" containerID="5eff6c224383356bf3bbfb5592e49ea1d884c64d51769216473768a350e3cf28" Feb 18 00:48:38 crc kubenswrapper[4847]: E0218 00:48:38.497241 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5eff6c224383356bf3bbfb5592e49ea1d884c64d51769216473768a350e3cf28\": container with ID starting with 5eff6c224383356bf3bbfb5592e49ea1d884c64d51769216473768a350e3cf28 not found: ID does not exist" containerID="5eff6c224383356bf3bbfb5592e49ea1d884c64d51769216473768a350e3cf28" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.497287 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eff6c224383356bf3bbfb5592e49ea1d884c64d51769216473768a350e3cf28"} err="failed to get container status \"5eff6c224383356bf3bbfb5592e49ea1d884c64d51769216473768a350e3cf28\": rpc error: code = NotFound desc = could not find container \"5eff6c224383356bf3bbfb5592e49ea1d884c64d51769216473768a350e3cf28\": container with ID starting with 5eff6c224383356bf3bbfb5592e49ea1d884c64d51769216473768a350e3cf28 not found: ID does not exist" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.514240 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "61cc2fbc-ba97-4934-888b-a52b7329727d" (UID: "61cc2fbc-ba97-4934-888b-a52b7329727d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.570393 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-config" (OuterVolumeSpecName: "config") pod "61cc2fbc-ba97-4934-888b-a52b7329727d" (UID: "61cc2fbc-ba97-4934-888b-a52b7329727d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.583268 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "61cc2fbc-ba97-4934-888b-a52b7329727d" (UID: "61cc2fbc-ba97-4934-888b-a52b7329727d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.590733 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.591006 4847 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.591076 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.599068 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "61cc2fbc-ba97-4934-888b-a52b7329727d" (UID: "61cc2fbc-ba97-4934-888b-a52b7329727d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.637096 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "61cc2fbc-ba97-4934-888b-a52b7329727d" (UID: "61cc2fbc-ba97-4934-888b-a52b7329727d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.663739 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-ddd9775f-8wm5n"] Feb 18 00:48:38 crc kubenswrapper[4847]: E0218 00:48:38.664165 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cc2fbc-ba97-4934-888b-a52b7329727d" containerName="dnsmasq-dns" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.664184 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cc2fbc-ba97-4934-888b-a52b7329727d" containerName="dnsmasq-dns" Feb 18 00:48:38 crc kubenswrapper[4847]: E0218 00:48:38.664205 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61cc2fbc-ba97-4934-888b-a52b7329727d" containerName="init" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.664213 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="61cc2fbc-ba97-4934-888b-a52b7329727d" containerName="init" Feb 18 00:48:38 crc kubenswrapper[4847]: E0218 00:48:38.664240 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca59c512-1360-4daf-9ee3-9c5c7cd143e1" containerName="barbican-db-sync" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.664247 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca59c512-1360-4daf-9ee3-9c5c7cd143e1" containerName="barbican-db-sync" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.664406 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="61cc2fbc-ba97-4934-888b-a52b7329727d" 
containerName="dnsmasq-dns" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.664422 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca59c512-1360-4daf-9ee3-9c5c7cd143e1" containerName="barbican-db-sync" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.665394 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-ddd9775f-8wm5n" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.675324 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.676498 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-ddd9775f-8wm5n"] Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.676937 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-wt6q8" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.683422 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.711285 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be03f8a3-0db4-45c7-90d9-6911a23b39c9-config-data-custom\") pod \"barbican-worker-ddd9775f-8wm5n\" (UID: \"be03f8a3-0db4-45c7-90d9-6911a23b39c9\") " pod="openstack/barbican-worker-ddd9775f-8wm5n" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.711342 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be03f8a3-0db4-45c7-90d9-6911a23b39c9-logs\") pod \"barbican-worker-ddd9775f-8wm5n\" (UID: \"be03f8a3-0db4-45c7-90d9-6911a23b39c9\") " pod="openstack/barbican-worker-ddd9775f-8wm5n" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.711370 4847 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx9fp\" (UniqueName: \"kubernetes.io/projected/be03f8a3-0db4-45c7-90d9-6911a23b39c9-kube-api-access-gx9fp\") pod \"barbican-worker-ddd9775f-8wm5n\" (UID: \"be03f8a3-0db4-45c7-90d9-6911a23b39c9\") " pod="openstack/barbican-worker-ddd9775f-8wm5n" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.711393 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be03f8a3-0db4-45c7-90d9-6911a23b39c9-config-data\") pod \"barbican-worker-ddd9775f-8wm5n\" (UID: \"be03f8a3-0db4-45c7-90d9-6911a23b39c9\") " pod="openstack/barbican-worker-ddd9775f-8wm5n" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.712436 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be03f8a3-0db4-45c7-90d9-6911a23b39c9-combined-ca-bundle\") pod \"barbican-worker-ddd9775f-8wm5n\" (UID: \"be03f8a3-0db4-45c7-90d9-6911a23b39c9\") " pod="openstack/barbican-worker-ddd9775f-8wm5n" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.712496 4847 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.712507 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61cc2fbc-ba97-4934-888b-a52b7329727d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.759020 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-64755b45d-nv688"] Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.760814 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-64755b45d-nv688" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.766285 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.773192 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-64755b45d-nv688"] Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.814486 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be03f8a3-0db4-45c7-90d9-6911a23b39c9-combined-ca-bundle\") pod \"barbican-worker-ddd9775f-8wm5n\" (UID: \"be03f8a3-0db4-45c7-90d9-6911a23b39c9\") " pod="openstack/barbican-worker-ddd9775f-8wm5n" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.814767 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be03f8a3-0db4-45c7-90d9-6911a23b39c9-config-data-custom\") pod \"barbican-worker-ddd9775f-8wm5n\" (UID: \"be03f8a3-0db4-45c7-90d9-6911a23b39c9\") " pod="openstack/barbican-worker-ddd9775f-8wm5n" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.814807 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be03f8a3-0db4-45c7-90d9-6911a23b39c9-logs\") pod \"barbican-worker-ddd9775f-8wm5n\" (UID: \"be03f8a3-0db4-45c7-90d9-6911a23b39c9\") " pod="openstack/barbican-worker-ddd9775f-8wm5n" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.814833 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx9fp\" (UniqueName: \"kubernetes.io/projected/be03f8a3-0db4-45c7-90d9-6911a23b39c9-kube-api-access-gx9fp\") pod \"barbican-worker-ddd9775f-8wm5n\" (UID: \"be03f8a3-0db4-45c7-90d9-6911a23b39c9\") " 
pod="openstack/barbican-worker-ddd9775f-8wm5n" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.814863 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be03f8a3-0db4-45c7-90d9-6911a23b39c9-config-data\") pod \"barbican-worker-ddd9775f-8wm5n\" (UID: \"be03f8a3-0db4-45c7-90d9-6911a23b39c9\") " pod="openstack/barbican-worker-ddd9775f-8wm5n" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.815925 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be03f8a3-0db4-45c7-90d9-6911a23b39c9-logs\") pod \"barbican-worker-ddd9775f-8wm5n\" (UID: \"be03f8a3-0db4-45c7-90d9-6911a23b39c9\") " pod="openstack/barbican-worker-ddd9775f-8wm5n" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.849429 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be03f8a3-0db4-45c7-90d9-6911a23b39c9-config-data-custom\") pod \"barbican-worker-ddd9775f-8wm5n\" (UID: \"be03f8a3-0db4-45c7-90d9-6911a23b39c9\") " pod="openstack/barbican-worker-ddd9775f-8wm5n" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.850386 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be03f8a3-0db4-45c7-90d9-6911a23b39c9-config-data\") pod \"barbican-worker-ddd9775f-8wm5n\" (UID: \"be03f8a3-0db4-45c7-90d9-6911a23b39c9\") " pod="openstack/barbican-worker-ddd9775f-8wm5n" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.863478 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be03f8a3-0db4-45c7-90d9-6911a23b39c9-combined-ca-bundle\") pod \"barbican-worker-ddd9775f-8wm5n\" (UID: \"be03f8a3-0db4-45c7-90d9-6911a23b39c9\") " pod="openstack/barbican-worker-ddd9775f-8wm5n" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 
00:48:38.927651 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f21e33b-cde8-4278-927c-b9566864f208-logs\") pod \"barbican-keystone-listener-64755b45d-nv688\" (UID: \"8f21e33b-cde8-4278-927c-b9566864f208\") " pod="openstack/barbican-keystone-listener-64755b45d-nv688" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.928547 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f21e33b-cde8-4278-927c-b9566864f208-config-data-custom\") pod \"barbican-keystone-listener-64755b45d-nv688\" (UID: \"8f21e33b-cde8-4278-927c-b9566864f208\") " pod="openstack/barbican-keystone-listener-64755b45d-nv688" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.928672 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wbs9\" (UniqueName: \"kubernetes.io/projected/8f21e33b-cde8-4278-927c-b9566864f208-kube-api-access-9wbs9\") pod \"barbican-keystone-listener-64755b45d-nv688\" (UID: \"8f21e33b-cde8-4278-927c-b9566864f208\") " pod="openstack/barbican-keystone-listener-64755b45d-nv688" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.928713 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f21e33b-cde8-4278-927c-b9566864f208-config-data\") pod \"barbican-keystone-listener-64755b45d-nv688\" (UID: \"8f21e33b-cde8-4278-927c-b9566864f208\") " pod="openstack/barbican-keystone-listener-64755b45d-nv688" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.928743 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f21e33b-cde8-4278-927c-b9566864f208-combined-ca-bundle\") pod 
\"barbican-keystone-listener-64755b45d-nv688\" (UID: \"8f21e33b-cde8-4278-927c-b9566864f208\") " pod="openstack/barbican-keystone-listener-64755b45d-nv688" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.960727 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-xm5hm"] Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.960870 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx9fp\" (UniqueName: \"kubernetes.io/projected/be03f8a3-0db4-45c7-90d9-6911a23b39c9-kube-api-access-gx9fp\") pod \"barbican-worker-ddd9775f-8wm5n\" (UID: \"be03f8a3-0db4-45c7-90d9-6911a23b39c9\") " pod="openstack/barbican-worker-ddd9775f-8wm5n" Feb 18 00:48:38 crc kubenswrapper[4847]: I0218 00:48:38.973670 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm" Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.010348 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-ddd9775f-8wm5n" Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.016666 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-xm5hm"] Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.042043 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-config\") pod \"dnsmasq-dns-848cf88cfc-xm5hm\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm" Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.042148 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wbs9\" (UniqueName: \"kubernetes.io/projected/8f21e33b-cde8-4278-927c-b9566864f208-kube-api-access-9wbs9\") pod \"barbican-keystone-listener-64755b45d-nv688\" (UID: \"8f21e33b-cde8-4278-927c-b9566864f208\") " pod="openstack/barbican-keystone-listener-64755b45d-nv688" Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.042209 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f21e33b-cde8-4278-927c-b9566864f208-config-data\") pod \"barbican-keystone-listener-64755b45d-nv688\" (UID: \"8f21e33b-cde8-4278-927c-b9566864f208\") " pod="openstack/barbican-keystone-listener-64755b45d-nv688" Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.042249 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f21e33b-cde8-4278-927c-b9566864f208-combined-ca-bundle\") pod \"barbican-keystone-listener-64755b45d-nv688\" (UID: \"8f21e33b-cde8-4278-927c-b9566864f208\") " pod="openstack/barbican-keystone-listener-64755b45d-nv688" Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.042304 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-xm5hm\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm" Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.042335 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-xm5hm\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm" Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.042453 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f21e33b-cde8-4278-927c-b9566864f208-logs\") pod \"barbican-keystone-listener-64755b45d-nv688\" (UID: \"8f21e33b-cde8-4278-927c-b9566864f208\") " pod="openstack/barbican-keystone-listener-64755b45d-nv688" Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.042681 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txlql\" (UniqueName: \"kubernetes.io/projected/9ff37608-c71f-48aa-9205-8aae29841abb-kube-api-access-txlql\") pod \"dnsmasq-dns-848cf88cfc-xm5hm\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm" Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.048160 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f21e33b-cde8-4278-927c-b9566864f208-logs\") pod \"barbican-keystone-listener-64755b45d-nv688\" (UID: \"8f21e33b-cde8-4278-927c-b9566864f208\") " pod="openstack/barbican-keystone-listener-64755b45d-nv688" Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.052306 4847 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-dxph7"] Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.058399 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f21e33b-cde8-4278-927c-b9566864f208-combined-ca-bundle\") pod \"barbican-keystone-listener-64755b45d-nv688\" (UID: \"8f21e33b-cde8-4278-927c-b9566864f208\") " pod="openstack/barbican-keystone-listener-64755b45d-nv688" Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.059498 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-xm5hm\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm" Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.059678 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f21e33b-cde8-4278-927c-b9566864f208-config-data-custom\") pod \"barbican-keystone-listener-64755b45d-nv688\" (UID: \"8f21e33b-cde8-4278-927c-b9566864f208\") " pod="openstack/barbican-keystone-listener-64755b45d-nv688" Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.059732 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-xm5hm\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm" Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.064968 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f21e33b-cde8-4278-927c-b9566864f208-config-data\") 
pod \"barbican-keystone-listener-64755b45d-nv688\" (UID: \"8f21e33b-cde8-4278-927c-b9566864f208\") " pod="openstack/barbican-keystone-listener-64755b45d-nv688" Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.082188 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8f21e33b-cde8-4278-927c-b9566864f208-config-data-custom\") pod \"barbican-keystone-listener-64755b45d-nv688\" (UID: \"8f21e33b-cde8-4278-927c-b9566864f208\") " pod="openstack/barbican-keystone-listener-64755b45d-nv688" Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.082763 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wbs9\" (UniqueName: \"kubernetes.io/projected/8f21e33b-cde8-4278-927c-b9566864f208-kube-api-access-9wbs9\") pod \"barbican-keystone-listener-64755b45d-nv688\" (UID: \"8f21e33b-cde8-4278-927c-b9566864f208\") " pod="openstack/barbican-keystone-listener-64755b45d-nv688" Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.106572 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-dxph7"] Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.128141 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-64755b45d-nv688" Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.148482 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-58d7bf495d-sp442"] Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.150729 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-58d7bf495d-sp442"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.159270 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.161646 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-xm5hm\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.161723 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-xm5hm\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.161750 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-config\") pod \"dnsmasq-dns-848cf88cfc-xm5hm\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.161778 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-xm5hm\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.161794 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-xm5hm\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.161851 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txlql\" (UniqueName: \"kubernetes.io/projected/9ff37608-c71f-48aa-9205-8aae29841abb-kube-api-access-txlql\") pod \"dnsmasq-dns-848cf88cfc-xm5hm\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.162692 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-config\") pod \"dnsmasq-dns-848cf88cfc-xm5hm\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.163214 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-xm5hm\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.163429 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-xm5hm\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.165905 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-xm5hm\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.177903 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-xm5hm\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.183162 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txlql\" (UniqueName: \"kubernetes.io/projected/9ff37608-c71f-48aa-9205-8aae29841abb-kube-api-access-txlql\") pod \"dnsmasq-dns-848cf88cfc-xm5hm\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.188788 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58d7bf495d-sp442"]
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.264098 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/015b3baa-45c2-4f4e-88d3-2aa917d3578c-config-data\") pod \"barbican-api-58d7bf495d-sp442\" (UID: \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\") " pod="openstack/barbican-api-58d7bf495d-sp442"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.264442 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015b3baa-45c2-4f4e-88d3-2aa917d3578c-combined-ca-bundle\") pod \"barbican-api-58d7bf495d-sp442\" (UID: \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\") " pod="openstack/barbican-api-58d7bf495d-sp442"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.264508 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9h2w\" (UniqueName: \"kubernetes.io/projected/015b3baa-45c2-4f4e-88d3-2aa917d3578c-kube-api-access-q9h2w\") pod \"barbican-api-58d7bf495d-sp442\" (UID: \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\") " pod="openstack/barbican-api-58d7bf495d-sp442"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.264545 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/015b3baa-45c2-4f4e-88d3-2aa917d3578c-logs\") pod \"barbican-api-58d7bf495d-sp442\" (UID: \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\") " pod="openstack/barbican-api-58d7bf495d-sp442"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.264566 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/015b3baa-45c2-4f4e-88d3-2aa917d3578c-config-data-custom\") pod \"barbican-api-58d7bf495d-sp442\" (UID: \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\") " pod="openstack/barbican-api-58d7bf495d-sp442"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.305577 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.311511 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-556dbf5b5b-fmjz4" event={"ID":"7d6a2670-a6f9-4fe7-8356-16cee45d0167","Type":"ContainerStarted","Data":"e2dc111804a9faef6ff8700a4a8c34574288c869ce3a771223c6787eeb8a0276"}
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.311593 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-556dbf5b5b-fmjz4"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.311627 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-556dbf5b5b-fmjz4"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.342593 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-556dbf5b5b-fmjz4" podStartSLOduration=7.342578776 podStartE2EDuration="7.342578776s" podCreationTimestamp="2026-02-18 00:48:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:48:39.339196351 +0000 UTC m=+1392.716547293" watchObservedRunningTime="2026-02-18 00:48:39.342578776 +0000 UTC m=+1392.719929718"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.373840 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/015b3baa-45c2-4f4e-88d3-2aa917d3578c-config-data\") pod \"barbican-api-58d7bf495d-sp442\" (UID: \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\") " pod="openstack/barbican-api-58d7bf495d-sp442"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.373881 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015b3baa-45c2-4f4e-88d3-2aa917d3578c-combined-ca-bundle\") pod \"barbican-api-58d7bf495d-sp442\" (UID: \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\") " pod="openstack/barbican-api-58d7bf495d-sp442"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.373943 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9h2w\" (UniqueName: \"kubernetes.io/projected/015b3baa-45c2-4f4e-88d3-2aa917d3578c-kube-api-access-q9h2w\") pod \"barbican-api-58d7bf495d-sp442\" (UID: \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\") " pod="openstack/barbican-api-58d7bf495d-sp442"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.373986 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/015b3baa-45c2-4f4e-88d3-2aa917d3578c-logs\") pod \"barbican-api-58d7bf495d-sp442\" (UID: \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\") " pod="openstack/barbican-api-58d7bf495d-sp442"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.374013 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/015b3baa-45c2-4f4e-88d3-2aa917d3578c-config-data-custom\") pod \"barbican-api-58d7bf495d-sp442\" (UID: \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\") " pod="openstack/barbican-api-58d7bf495d-sp442"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.384087 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015b3baa-45c2-4f4e-88d3-2aa917d3578c-combined-ca-bundle\") pod \"barbican-api-58d7bf495d-sp442\" (UID: \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\") " pod="openstack/barbican-api-58d7bf495d-sp442"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.384737 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/015b3baa-45c2-4f4e-88d3-2aa917d3578c-logs\") pod \"barbican-api-58d7bf495d-sp442\" (UID: \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\") " pod="openstack/barbican-api-58d7bf495d-sp442"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.395270 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/015b3baa-45c2-4f4e-88d3-2aa917d3578c-config-data\") pod \"barbican-api-58d7bf495d-sp442\" (UID: \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\") " pod="openstack/barbican-api-58d7bf495d-sp442"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.395319 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/015b3baa-45c2-4f4e-88d3-2aa917d3578c-config-data-custom\") pod \"barbican-api-58d7bf495d-sp442\" (UID: \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\") " pod="openstack/barbican-api-58d7bf495d-sp442"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.402265 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9h2w\" (UniqueName: \"kubernetes.io/projected/015b3baa-45c2-4f4e-88d3-2aa917d3578c-kube-api-access-q9h2w\") pod \"barbican-api-58d7bf495d-sp442\" (UID: \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\") " pod="openstack/barbican-api-58d7bf495d-sp442"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.461558 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61cc2fbc-ba97-4934-888b-a52b7329727d" path="/var/lib/kubelet/pods/61cc2fbc-ba97-4934-888b-a52b7329727d/volumes"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.567995 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58d7bf495d-sp442"
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.738624 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-ddd9775f-8wm5n"]
Feb 18 00:48:39 crc kubenswrapper[4847]: W0218 00:48:39.748355 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe03f8a3_0db4_45c7_90d9_6911a23b39c9.slice/crio-b074ec34dc70b171b3cc50ae1f9a2540ff85feb5302ee5ab82298b6b032f705d WatchSource:0}: Error finding container b074ec34dc70b171b3cc50ae1f9a2540ff85feb5302ee5ab82298b6b032f705d: Status 404 returned error can't find the container with id b074ec34dc70b171b3cc50ae1f9a2540ff85feb5302ee5ab82298b6b032f705d
Feb 18 00:48:39 crc kubenswrapper[4847]: I0218 00:48:39.836506 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-64755b45d-nv688"]
Feb 18 00:48:40 crc kubenswrapper[4847]: I0218 00:48:40.028136 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-xm5hm"]
Feb 18 00:48:40 crc kubenswrapper[4847]: I0218 00:48:40.119597 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58d7bf495d-sp442"]
Feb 18 00:48:40 crc kubenswrapper[4847]: I0218 00:48:40.335882 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-64755b45d-nv688" event={"ID":"8f21e33b-cde8-4278-927c-b9566864f208","Type":"ContainerStarted","Data":"e80858f2cc304026eb338686726df66b9f512bc09f5d7803c40d7ea1a27d40c6"}
Feb 18 00:48:40 crc kubenswrapper[4847]: I0218 00:48:40.338859 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-ddd9775f-8wm5n" event={"ID":"be03f8a3-0db4-45c7-90d9-6911a23b39c9","Type":"ContainerStarted","Data":"b074ec34dc70b171b3cc50ae1f9a2540ff85feb5302ee5ab82298b6b032f705d"}
Feb 18 00:48:40 crc kubenswrapper[4847]: I0218 00:48:40.345226 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58d7bf495d-sp442" event={"ID":"015b3baa-45c2-4f4e-88d3-2aa917d3578c","Type":"ContainerStarted","Data":"de45f85e9907ea9adfde3111e5df2da33c716158209d3c24c8657efa02b2c31d"}
Feb 18 00:48:40 crc kubenswrapper[4847]: I0218 00:48:40.348518 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm" event={"ID":"9ff37608-c71f-48aa-9205-8aae29841abb","Type":"ContainerStarted","Data":"14163aa0a955a273d886277c35143ddf51b3bbef9030e7397f12b74d4a001c08"}
Feb 18 00:48:41 crc kubenswrapper[4847]: I0218 00:48:41.359207 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58d7bf495d-sp442" event={"ID":"015b3baa-45c2-4f4e-88d3-2aa917d3578c","Type":"ContainerStarted","Data":"3d48adfb40645576d44e295308ca07c745dc85a559e180c16971f09c4d2cc672"}
Feb 18 00:48:41 crc kubenswrapper[4847]: I0218 00:48:41.359515 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58d7bf495d-sp442" event={"ID":"015b3baa-45c2-4f4e-88d3-2aa917d3578c","Type":"ContainerStarted","Data":"5b8f6a2ce95895255652bc5fe6164bb458de1e16b2b2234e29a39ca429fb3046"}
Feb 18 00:48:41 crc kubenswrapper[4847]: I0218 00:48:41.359559 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58d7bf495d-sp442"
Feb 18 00:48:41 crc kubenswrapper[4847]: I0218 00:48:41.359581 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58d7bf495d-sp442"
Feb 18 00:48:41 crc kubenswrapper[4847]: I0218 00:48:41.366672 4847 generic.go:334] "Generic (PLEG): container finished" podID="9ff37608-c71f-48aa-9205-8aae29841abb" containerID="c41d207ac2e62cb3235773bb4f3fa706c060b52735f066c84765da12d1182c40" exitCode=0
Feb 18 00:48:41 crc kubenswrapper[4847]: I0218 00:48:41.366714 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm" event={"ID":"9ff37608-c71f-48aa-9205-8aae29841abb","Type":"ContainerDied","Data":"c41d207ac2e62cb3235773bb4f3fa706c060b52735f066c84765da12d1182c40"}
Feb 18 00:48:41 crc kubenswrapper[4847]: I0218 00:48:41.405394 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-58d7bf495d-sp442" podStartSLOduration=3.405377285 podStartE2EDuration="3.405377285s" podCreationTimestamp="2026-02-18 00:48:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:48:41.393627771 +0000 UTC m=+1394.770978713" watchObservedRunningTime="2026-02-18 00:48:41.405377285 +0000 UTC m=+1394.782728227"
Feb 18 00:48:41 crc kubenswrapper[4847]: I0218 00:48:41.875890 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6db955874-66wrk"]
Feb 18 00:48:41 crc kubenswrapper[4847]: I0218 00:48:41.877666 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6db955874-66wrk"
Feb 18 00:48:41 crc kubenswrapper[4847]: I0218 00:48:41.926193 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Feb 18 00:48:41 crc kubenswrapper[4847]: I0218 00:48:41.926859 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Feb 18 00:48:41 crc kubenswrapper[4847]: I0218 00:48:41.957781 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6db955874-66wrk"]
Feb 18 00:48:42 crc kubenswrapper[4847]: I0218 00:48:42.045897 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/594db61c-0bfb-44cf-be11-cae6758e9fac-combined-ca-bundle\") pod \"barbican-api-6db955874-66wrk\" (UID: \"594db61c-0bfb-44cf-be11-cae6758e9fac\") " pod="openstack/barbican-api-6db955874-66wrk"
Feb 18 00:48:42 crc kubenswrapper[4847]: I0218 00:48:42.046189 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/594db61c-0bfb-44cf-be11-cae6758e9fac-internal-tls-certs\") pod \"barbican-api-6db955874-66wrk\" (UID: \"594db61c-0bfb-44cf-be11-cae6758e9fac\") " pod="openstack/barbican-api-6db955874-66wrk"
Feb 18 00:48:42 crc kubenswrapper[4847]: I0218 00:48:42.046309 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/594db61c-0bfb-44cf-be11-cae6758e9fac-config-data-custom\") pod \"barbican-api-6db955874-66wrk\" (UID: \"594db61c-0bfb-44cf-be11-cae6758e9fac\") " pod="openstack/barbican-api-6db955874-66wrk"
Feb 18 00:48:42 crc kubenswrapper[4847]: I0218 00:48:42.046432 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/594db61c-0bfb-44cf-be11-cae6758e9fac-public-tls-certs\") pod \"barbican-api-6db955874-66wrk\" (UID: \"594db61c-0bfb-44cf-be11-cae6758e9fac\") " pod="openstack/barbican-api-6db955874-66wrk"
Feb 18 00:48:42 crc kubenswrapper[4847]: I0218 00:48:42.046537 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/594db61c-0bfb-44cf-be11-cae6758e9fac-config-data\") pod \"barbican-api-6db955874-66wrk\" (UID: \"594db61c-0bfb-44cf-be11-cae6758e9fac\") " pod="openstack/barbican-api-6db955874-66wrk"
Feb 18 00:48:42 crc kubenswrapper[4847]: I0218 00:48:42.046738 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/594db61c-0bfb-44cf-be11-cae6758e9fac-logs\") pod \"barbican-api-6db955874-66wrk\" (UID: \"594db61c-0bfb-44cf-be11-cae6758e9fac\") " pod="openstack/barbican-api-6db955874-66wrk"
Feb 18 00:48:42 crc kubenswrapper[4847]: I0218 00:48:42.046911 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgnhj\" (UniqueName: \"kubernetes.io/projected/594db61c-0bfb-44cf-be11-cae6758e9fac-kube-api-access-vgnhj\") pod \"barbican-api-6db955874-66wrk\" (UID: \"594db61c-0bfb-44cf-be11-cae6758e9fac\") " pod="openstack/barbican-api-6db955874-66wrk"
Feb 18 00:48:42 crc kubenswrapper[4847]: I0218 00:48:42.148787 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/594db61c-0bfb-44cf-be11-cae6758e9fac-combined-ca-bundle\") pod \"barbican-api-6db955874-66wrk\" (UID: \"594db61c-0bfb-44cf-be11-cae6758e9fac\") " pod="openstack/barbican-api-6db955874-66wrk"
Feb 18 00:48:42 crc kubenswrapper[4847]: I0218 00:48:42.148839 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/594db61c-0bfb-44cf-be11-cae6758e9fac-internal-tls-certs\") pod \"barbican-api-6db955874-66wrk\" (UID: \"594db61c-0bfb-44cf-be11-cae6758e9fac\") " pod="openstack/barbican-api-6db955874-66wrk"
Feb 18 00:48:42 crc kubenswrapper[4847]: I0218 00:48:42.148868 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/594db61c-0bfb-44cf-be11-cae6758e9fac-config-data-custom\") pod \"barbican-api-6db955874-66wrk\" (UID: \"594db61c-0bfb-44cf-be11-cae6758e9fac\") " pod="openstack/barbican-api-6db955874-66wrk"
Feb 18 00:48:42 crc kubenswrapper[4847]: I0218 00:48:42.148893 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/594db61c-0bfb-44cf-be11-cae6758e9fac-public-tls-certs\") pod \"barbican-api-6db955874-66wrk\" (UID: \"594db61c-0bfb-44cf-be11-cae6758e9fac\") " pod="openstack/barbican-api-6db955874-66wrk"
Feb 18 00:48:42 crc kubenswrapper[4847]: I0218 00:48:42.148915 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/594db61c-0bfb-44cf-be11-cae6758e9fac-config-data\") pod \"barbican-api-6db955874-66wrk\" (UID: \"594db61c-0bfb-44cf-be11-cae6758e9fac\") " pod="openstack/barbican-api-6db955874-66wrk"
Feb 18 00:48:42 crc kubenswrapper[4847]: I0218 00:48:42.148962 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/594db61c-0bfb-44cf-be11-cae6758e9fac-logs\") pod \"barbican-api-6db955874-66wrk\" (UID: \"594db61c-0bfb-44cf-be11-cae6758e9fac\") " pod="openstack/barbican-api-6db955874-66wrk"
Feb 18 00:48:42 crc kubenswrapper[4847]: I0218 00:48:42.149006 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgnhj\" (UniqueName: \"kubernetes.io/projected/594db61c-0bfb-44cf-be11-cae6758e9fac-kube-api-access-vgnhj\") pod \"barbican-api-6db955874-66wrk\" (UID: \"594db61c-0bfb-44cf-be11-cae6758e9fac\") " pod="openstack/barbican-api-6db955874-66wrk"
Feb 18 00:48:42 crc kubenswrapper[4847]: I0218 00:48:42.149578 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/594db61c-0bfb-44cf-be11-cae6758e9fac-logs\") pod \"barbican-api-6db955874-66wrk\" (UID: \"594db61c-0bfb-44cf-be11-cae6758e9fac\") " pod="openstack/barbican-api-6db955874-66wrk"
Feb 18 00:48:42 crc kubenswrapper[4847]: I0218 00:48:42.153300 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/594db61c-0bfb-44cf-be11-cae6758e9fac-combined-ca-bundle\") pod \"barbican-api-6db955874-66wrk\" (UID: \"594db61c-0bfb-44cf-be11-cae6758e9fac\") " pod="openstack/barbican-api-6db955874-66wrk"
Feb 18 00:48:42 crc kubenswrapper[4847]: I0218 00:48:42.153555 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/594db61c-0bfb-44cf-be11-cae6758e9fac-internal-tls-certs\") pod \"barbican-api-6db955874-66wrk\" (UID: \"594db61c-0bfb-44cf-be11-cae6758e9fac\") " pod="openstack/barbican-api-6db955874-66wrk"
Feb 18 00:48:42 crc kubenswrapper[4847]: I0218 00:48:42.153955 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/594db61c-0bfb-44cf-be11-cae6758e9fac-public-tls-certs\") pod \"barbican-api-6db955874-66wrk\" (UID: \"594db61c-0bfb-44cf-be11-cae6758e9fac\") " pod="openstack/barbican-api-6db955874-66wrk"
Feb 18 00:48:42 crc kubenswrapper[4847]: I0218 00:48:42.156153 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/594db61c-0bfb-44cf-be11-cae6758e9fac-config-data-custom\") pod \"barbican-api-6db955874-66wrk\" (UID: \"594db61c-0bfb-44cf-be11-cae6758e9fac\") " pod="openstack/barbican-api-6db955874-66wrk"
Feb 18 00:48:42 crc kubenswrapper[4847]: I0218 00:48:42.156777 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/594db61c-0bfb-44cf-be11-cae6758e9fac-config-data\") pod \"barbican-api-6db955874-66wrk\" (UID: \"594db61c-0bfb-44cf-be11-cae6758e9fac\") " pod="openstack/barbican-api-6db955874-66wrk"
Feb 18 00:48:42 crc kubenswrapper[4847]: I0218 00:48:42.169765 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgnhj\" (UniqueName: \"kubernetes.io/projected/594db61c-0bfb-44cf-be11-cae6758e9fac-kube-api-access-vgnhj\") pod \"barbican-api-6db955874-66wrk\" (UID: \"594db61c-0bfb-44cf-be11-cae6758e9fac\") " pod="openstack/barbican-api-6db955874-66wrk"
Feb 18 00:48:42 crc kubenswrapper[4847]: I0218 00:48:42.251770 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6db955874-66wrk"
Feb 18 00:48:42 crc kubenswrapper[4847]: I0218 00:48:42.884204 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6db955874-66wrk"]
Feb 18 00:48:43 crc kubenswrapper[4847]: I0218 00:48:43.429271 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm" podStartSLOduration=5.42925248 podStartE2EDuration="5.42925248s" podCreationTimestamp="2026-02-18 00:48:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:48:43.427331172 +0000 UTC m=+1396.804682124" watchObservedRunningTime="2026-02-18 00:48:43.42925248 +0000 UTC m=+1396.806603422"
Feb 18 00:48:43 crc kubenswrapper[4847]: I0218 00:48:43.432042 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm"
Feb 18 00:48:43 crc kubenswrapper[4847]: I0218 00:48:43.432096 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm" event={"ID":"9ff37608-c71f-48aa-9205-8aae29841abb","Type":"ContainerStarted","Data":"66d57d39a883115ade6701d73afc8016bbeb70be832c3d63a87cb2c56d84697c"}
Feb 18 00:48:43 crc kubenswrapper[4847]: I0218 00:48:43.432122 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qxdsw" event={"ID":"e40815e0-c0e4-4265-94f8-c9c7b262a011","Type":"ContainerStarted","Data":"b38ffced6b3f4b1b2342c91c61f122a85b0deac1a667511704d69d1ed8f11cf4"}
Feb 18 00:48:43 crc kubenswrapper[4847]: I0218 00:48:43.432137 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-znxsz" event={"ID":"014e96ac-8dcb-4d73-a9e1-1ade26742005","Type":"ContainerStarted","Data":"907bac399e438cfe5a24a2d99de0d7cd1b40908df749631f1f3ab4baff3f4744"}
Feb 18 00:48:43 crc kubenswrapper[4847]: I0218 00:48:43.432151 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6db955874-66wrk" event={"ID":"594db61c-0bfb-44cf-be11-cae6758e9fac","Type":"ContainerStarted","Data":"c6e1f07920ec85fff592e67ad8760f9b219cb3784b501b6f7bc658f0ffde3a81"}
Feb 18 00:48:43 crc kubenswrapper[4847]: I0218 00:48:43.432165 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6db955874-66wrk" event={"ID":"594db61c-0bfb-44cf-be11-cae6758e9fac","Type":"ContainerStarted","Data":"dcb1bc7e34a55ae610e1acede4bef072ad5e21a988d1772e61f6b6bf089e65c3"}
Feb 18 00:48:43 crc kubenswrapper[4847]: I0218 00:48:43.432177 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-ddd9775f-8wm5n" event={"ID":"be03f8a3-0db4-45c7-90d9-6911a23b39c9","Type":"ContainerStarted","Data":"619077fb6c3925fb3fe49cb4243e89f6652f4e9ac96fb12848abcf97e60d2967"}
Feb 18 00:48:43 crc kubenswrapper[4847]: I0218 00:48:43.432190 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-ddd9775f-8wm5n" event={"ID":"be03f8a3-0db4-45c7-90d9-6911a23b39c9","Type":"ContainerStarted","Data":"13a17cacfba9d4e924ed653444db72330853b725eaabd93add8084c1db21d376"}
Feb 18 00:48:43 crc kubenswrapper[4847]: I0218 00:48:43.453446 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-ddd9775f-8wm5n" podStartSLOduration=3.178123478 podStartE2EDuration="5.453429565s" podCreationTimestamp="2026-02-18 00:48:38 +0000 UTC" firstStartedPulling="2026-02-18 00:48:39.755393506 +0000 UTC m=+1393.132744448" lastFinishedPulling="2026-02-18 00:48:42.030699593 +0000 UTC m=+1395.408050535" observedRunningTime="2026-02-18 00:48:43.449516517 +0000 UTC m=+1396.826867459" watchObservedRunningTime="2026-02-18 00:48:43.453429565 +0000 UTC m=+1396.830780507"
Feb 18 00:48:43 crc kubenswrapper[4847]: I0218 00:48:43.473974 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-qxdsw" podStartSLOduration=3.764389963 podStartE2EDuration="45.473953089s" podCreationTimestamp="2026-02-18 00:47:58 +0000 UTC" firstStartedPulling="2026-02-18 00:48:00.320790058 +0000 UTC m=+1353.698141000" lastFinishedPulling="2026-02-18 00:48:42.030353184 +0000 UTC m=+1395.407704126" observedRunningTime="2026-02-18 00:48:43.464137633 +0000 UTC m=+1396.841488575" watchObservedRunningTime="2026-02-18 00:48:43.473953089 +0000 UTC m=+1396.851304031"
Feb 18 00:48:43 crc kubenswrapper[4847]: I0218 00:48:43.488044 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-znxsz" podStartSLOduration=4.469628817 podStartE2EDuration="45.488025871s" podCreationTimestamp="2026-02-18 00:47:58 +0000 UTC" firstStartedPulling="2026-02-18 00:47:59.795783417 +0000 UTC m=+1353.173134359" lastFinishedPulling="2026-02-18 00:48:40.814180471 +0000 UTC m=+1394.191531413" observedRunningTime="2026-02-18 00:48:43.4867689 +0000 UTC m=+1396.864119842" watchObservedRunningTime="2026-02-18 00:48:43.488025871 +0000 UTC m=+1396.865376813"
Feb 18 00:48:47 crc kubenswrapper[4847]: I0218 00:48:47.655152 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-58d7bf495d-sp442" podUID="015b3baa-45c2-4f4e-88d3-2aa917d3578c" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 00:48:49 crc kubenswrapper[4847]: I0218 00:48:49.306784 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm"
Feb 18 00:48:49 crc kubenswrapper[4847]: I0218 00:48:49.379500 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-l29m5"]
Feb 18 00:48:49 crc kubenswrapper[4847]: I0218 00:48:49.379960 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-l29m5" podUID="3aface83-1656-4958-b676-04bd0f99b9ac" containerName="dnsmasq-dns" containerID="cri-o://2114ec12f80bef88d6573ea0e1d0a6ca9ce346d90d878f1f837ef8cd45ee4e54" gracePeriod=10
Feb 18 00:48:49 crc kubenswrapper[4847]: I0218 00:48:49.535405 4847 generic.go:334] "Generic (PLEG): container finished" podID="014e96ac-8dcb-4d73-a9e1-1ade26742005" containerID="907bac399e438cfe5a24a2d99de0d7cd1b40908df749631f1f3ab4baff3f4744" exitCode=0
Feb 18 00:48:49 crc kubenswrapper[4847]: I0218 00:48:49.535790 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-znxsz" event={"ID":"014e96ac-8dcb-4d73-a9e1-1ade26742005","Type":"ContainerDied","Data":"907bac399e438cfe5a24a2d99de0d7cd1b40908df749631f1f3ab4baff3f4744"}
Feb 18 00:48:49 crc kubenswrapper[4847]: I0218 00:48:49.538891 4847 generic.go:334] "Generic (PLEG): container finished" podID="e40815e0-c0e4-4265-94f8-c9c7b262a011" containerID="b38ffced6b3f4b1b2342c91c61f122a85b0deac1a667511704d69d1ed8f11cf4" exitCode=0
Feb 18 00:48:49 crc kubenswrapper[4847]: I0218 00:48:49.538942 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qxdsw" event={"ID":"e40815e0-c0e4-4265-94f8-c9c7b262a011","Type":"ContainerDied","Data":"b38ffced6b3f4b1b2342c91c61f122a85b0deac1a667511704d69d1ed8f11cf4"}
Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.063482 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-l29m5"
Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.225261 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-ovsdbserver-sb\") pod \"3aface83-1656-4958-b676-04bd0f99b9ac\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") "
Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.225398 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-dns-swift-storage-0\") pod \"3aface83-1656-4958-b676-04bd0f99b9ac\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") "
Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.225438 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-dns-svc\") pod \"3aface83-1656-4958-b676-04bd0f99b9ac\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") "
Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.225456 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-ovsdbserver-nb\") pod \"3aface83-1656-4958-b676-04bd0f99b9ac\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") "
Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.225499 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pkgf\" (UniqueName: \"kubernetes.io/projected/3aface83-1656-4958-b676-04bd0f99b9ac-kube-api-access-8pkgf\") pod \"3aface83-1656-4958-b676-04bd0f99b9ac\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") "
Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.225576 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-config\") pod \"3aface83-1656-4958-b676-04bd0f99b9ac\" (UID: \"3aface83-1656-4958-b676-04bd0f99b9ac\") "
Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.240884 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aface83-1656-4958-b676-04bd0f99b9ac-kube-api-access-8pkgf" (OuterVolumeSpecName: "kube-api-access-8pkgf") pod "3aface83-1656-4958-b676-04bd0f99b9ac" (UID: "3aface83-1656-4958-b676-04bd0f99b9ac"). InnerVolumeSpecName "kube-api-access-8pkgf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.284910 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3aface83-1656-4958-b676-04bd0f99b9ac" (UID: "3aface83-1656-4958-b676-04bd0f99b9ac"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.288015 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-config" (OuterVolumeSpecName: "config") pod "3aface83-1656-4958-b676-04bd0f99b9ac" (UID: "3aface83-1656-4958-b676-04bd0f99b9ac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.311027 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3aface83-1656-4958-b676-04bd0f99b9ac" (UID: "3aface83-1656-4958-b676-04bd0f99b9ac"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.312675 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3aface83-1656-4958-b676-04bd0f99b9ac" (UID: "3aface83-1656-4958-b676-04bd0f99b9ac"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.324304 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3aface83-1656-4958-b676-04bd0f99b9ac" (UID: "3aface83-1656-4958-b676-04bd0f99b9ac"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.328172 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.328212 4847 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.328225 4847 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.328236 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-ovsdbserver-nb\") on node
\"crc\" DevicePath \"\"" Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.328246 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pkgf\" (UniqueName: \"kubernetes.io/projected/3aface83-1656-4958-b676-04bd0f99b9ac-kube-api-access-8pkgf\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.328256 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aface83-1656-4958-b676-04bd0f99b9ac-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.552478 4847 generic.go:334] "Generic (PLEG): container finished" podID="3aface83-1656-4958-b676-04bd0f99b9ac" containerID="2114ec12f80bef88d6573ea0e1d0a6ca9ce346d90d878f1f837ef8cd45ee4e54" exitCode=0 Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.552555 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-l29m5" event={"ID":"3aface83-1656-4958-b676-04bd0f99b9ac","Type":"ContainerDied","Data":"2114ec12f80bef88d6573ea0e1d0a6ca9ce346d90d878f1f837ef8cd45ee4e54"} Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.552641 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-l29m5" event={"ID":"3aface83-1656-4958-b676-04bd0f99b9ac","Type":"ContainerDied","Data":"45e0a5ca26d24996a4a6e156e2fb58f2698f896f0f8079dc7981ba6357f2d3e7"} Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.552594 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-l29m5" Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.552671 4847 scope.go:117] "RemoveContainer" containerID="2114ec12f80bef88d6573ea0e1d0a6ca9ce346d90d878f1f837ef8cd45ee4e54" Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.555195 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6db955874-66wrk" event={"ID":"594db61c-0bfb-44cf-be11-cae6758e9fac","Type":"ContainerStarted","Data":"4900b1595fb1933bdf01ee002bddf178e315d56eb7a819928385dea5a9aee431"} Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.555705 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6db955874-66wrk" Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.555732 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6db955874-66wrk" Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.566584 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a612e518-e7f5-4c88-8534-16768f748bed","Type":"ContainerStarted","Data":"4778c2c18fbd393b728852821d8ac4b98b25a50062108e85a4bd2590b961ed21"} Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.566833 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a612e518-e7f5-4c88-8534-16768f748bed" containerName="ceilometer-central-agent" containerID="cri-o://a4faf8561254c979e53592ccd32511604ad40f95d92cab677e778c8ef0fb8e12" gracePeriod=30 Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.567132 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.567189 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a612e518-e7f5-4c88-8534-16768f748bed" containerName="proxy-httpd" 
containerID="cri-o://4778c2c18fbd393b728852821d8ac4b98b25a50062108e85a4bd2590b961ed21" gracePeriod=30 Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.567253 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a612e518-e7f5-4c88-8534-16768f748bed" containerName="sg-core" containerID="cri-o://79647fde066f83ce069c92c33723f7503bea64ac6946f516bb71192d81bfcb8f" gracePeriod=30 Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.567312 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a612e518-e7f5-4c88-8534-16768f748bed" containerName="ceilometer-notification-agent" containerID="cri-o://677009da1af8e905666a7aeef540e1f94bb0fe8f2689e0098dbb4929c4cd3291" gracePeriod=30 Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.574678 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-64755b45d-nv688" event={"ID":"8f21e33b-cde8-4278-927c-b9566864f208","Type":"ContainerStarted","Data":"1eef48f0d86d7ed56ac4b0dbdcd46944413e1ca5520d8585c9fef44c0c9790ee"} Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.574753 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-64755b45d-nv688" event={"ID":"8f21e33b-cde8-4278-927c-b9566864f208","Type":"ContainerStarted","Data":"10dac1d2955896f54d2a403e512a27cdca3d08a98b4cfc61cc49ee4e591054e9"} Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.595680 4847 scope.go:117] "RemoveContainer" containerID="8794edf0ded4f806c6495566bc7cf8876155986b2ac36ac80bb7b0c7b6bb6821" Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.618062 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6db955874-66wrk" podStartSLOduration=9.618034871999999 podStartE2EDuration="9.618034872s" podCreationTimestamp="2026-02-18 00:48:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:48:50.588995575 +0000 UTC m=+1403.966346517" watchObservedRunningTime="2026-02-18 00:48:50.618034872 +0000 UTC m=+1403.995385814" Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.634183 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.6827846749999997 podStartE2EDuration="51.634163865s" podCreationTimestamp="2026-02-18 00:47:59 +0000 UTC" firstStartedPulling="2026-02-18 00:48:00.458760265 +0000 UTC m=+1353.836111207" lastFinishedPulling="2026-02-18 00:48:49.410139455 +0000 UTC m=+1402.787490397" observedRunningTime="2026-02-18 00:48:50.625268813 +0000 UTC m=+1404.002619755" watchObservedRunningTime="2026-02-18 00:48:50.634163865 +0000 UTC m=+1404.011514807" Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.655111 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-64755b45d-nv688" podStartSLOduration=3.084787643 podStartE2EDuration="12.655092999s" podCreationTimestamp="2026-02-18 00:48:38 +0000 UTC" firstStartedPulling="2026-02-18 00:48:39.844499936 +0000 UTC m=+1393.221850868" lastFinishedPulling="2026-02-18 00:48:49.414805272 +0000 UTC m=+1402.792156224" observedRunningTime="2026-02-18 00:48:50.648362661 +0000 UTC m=+1404.025713603" watchObservedRunningTime="2026-02-18 00:48:50.655092999 +0000 UTC m=+1404.032443941" Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.694422 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-l29m5"] Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.729766 4847 scope.go:117] "RemoveContainer" containerID="2114ec12f80bef88d6573ea0e1d0a6ca9ce346d90d878f1f837ef8cd45ee4e54" Feb 18 00:48:50 crc kubenswrapper[4847]: E0218 00:48:50.731379 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"2114ec12f80bef88d6573ea0e1d0a6ca9ce346d90d878f1f837ef8cd45ee4e54\": container with ID starting with 2114ec12f80bef88d6573ea0e1d0a6ca9ce346d90d878f1f837ef8cd45ee4e54 not found: ID does not exist" containerID="2114ec12f80bef88d6573ea0e1d0a6ca9ce346d90d878f1f837ef8cd45ee4e54" Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.731424 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2114ec12f80bef88d6573ea0e1d0a6ca9ce346d90d878f1f837ef8cd45ee4e54"} err="failed to get container status \"2114ec12f80bef88d6573ea0e1d0a6ca9ce346d90d878f1f837ef8cd45ee4e54\": rpc error: code = NotFound desc = could not find container \"2114ec12f80bef88d6573ea0e1d0a6ca9ce346d90d878f1f837ef8cd45ee4e54\": container with ID starting with 2114ec12f80bef88d6573ea0e1d0a6ca9ce346d90d878f1f837ef8cd45ee4e54 not found: ID does not exist" Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.731448 4847 scope.go:117] "RemoveContainer" containerID="8794edf0ded4f806c6495566bc7cf8876155986b2ac36ac80bb7b0c7b6bb6821" Feb 18 00:48:50 crc kubenswrapper[4847]: E0218 00:48:50.735809 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8794edf0ded4f806c6495566bc7cf8876155986b2ac36ac80bb7b0c7b6bb6821\": container with ID starting with 8794edf0ded4f806c6495566bc7cf8876155986b2ac36ac80bb7b0c7b6bb6821 not found: ID does not exist" containerID="8794edf0ded4f806c6495566bc7cf8876155986b2ac36ac80bb7b0c7b6bb6821" Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.735842 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8794edf0ded4f806c6495566bc7cf8876155986b2ac36ac80bb7b0c7b6bb6821"} err="failed to get container status \"8794edf0ded4f806c6495566bc7cf8876155986b2ac36ac80bb7b0c7b6bb6821\": rpc error: code = NotFound desc = could not find container \"8794edf0ded4f806c6495566bc7cf8876155986b2ac36ac80bb7b0c7b6bb6821\": container 
with ID starting with 8794edf0ded4f806c6495566bc7cf8876155986b2ac36ac80bb7b0c7b6bb6821 not found: ID does not exist" Feb 18 00:48:50 crc kubenswrapper[4847]: I0218 00:48:50.820045 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-l29m5"] Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.145523 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qxdsw" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.207053 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-znxsz" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.251930 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-scripts\") pod \"e40815e0-c0e4-4265-94f8-c9c7b262a011\" (UID: \"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.252021 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e40815e0-c0e4-4265-94f8-c9c7b262a011-etc-machine-id\") pod \"e40815e0-c0e4-4265-94f8-c9c7b262a011\" (UID: \"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.252115 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-config-data\") pod \"e40815e0-c0e4-4265-94f8-c9c7b262a011\" (UID: \"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.252188 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-db-sync-config-data\") pod \"e40815e0-c0e4-4265-94f8-c9c7b262a011\" (UID: 
\"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.252211 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf4ps\" (UniqueName: \"kubernetes.io/projected/e40815e0-c0e4-4265-94f8-c9c7b262a011-kube-api-access-xf4ps\") pod \"e40815e0-c0e4-4265-94f8-c9c7b262a011\" (UID: \"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.252229 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e40815e0-c0e4-4265-94f8-c9c7b262a011-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e40815e0-c0e4-4265-94f8-c9c7b262a011" (UID: "e40815e0-c0e4-4265-94f8-c9c7b262a011"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.252313 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-combined-ca-bundle\") pod \"e40815e0-c0e4-4265-94f8-c9c7b262a011\" (UID: \"e40815e0-c0e4-4265-94f8-c9c7b262a011\") " Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.252681 4847 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e40815e0-c0e4-4265-94f8-c9c7b262a011-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.258989 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-scripts" (OuterVolumeSpecName: "scripts") pod "e40815e0-c0e4-4265-94f8-c9c7b262a011" (UID: "e40815e0-c0e4-4265-94f8-c9c7b262a011"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.261904 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e40815e0-c0e4-4265-94f8-c9c7b262a011" (UID: "e40815e0-c0e4-4265-94f8-c9c7b262a011"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.268814 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e40815e0-c0e4-4265-94f8-c9c7b262a011-kube-api-access-xf4ps" (OuterVolumeSpecName: "kube-api-access-xf4ps") pod "e40815e0-c0e4-4265-94f8-c9c7b262a011" (UID: "e40815e0-c0e4-4265-94f8-c9c7b262a011"). InnerVolumeSpecName "kube-api-access-xf4ps". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.298745 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e40815e0-c0e4-4265-94f8-c9c7b262a011" (UID: "e40815e0-c0e4-4265-94f8-c9c7b262a011"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.347747 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-config-data" (OuterVolumeSpecName: "config-data") pod "e40815e0-c0e4-4265-94f8-c9c7b262a011" (UID: "e40815e0-c0e4-4265-94f8-c9c7b262a011"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.354245 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh7hb\" (UniqueName: \"kubernetes.io/projected/014e96ac-8dcb-4d73-a9e1-1ade26742005-kube-api-access-zh7hb\") pod \"014e96ac-8dcb-4d73-a9e1-1ade26742005\" (UID: \"014e96ac-8dcb-4d73-a9e1-1ade26742005\") " Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.354484 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/014e96ac-8dcb-4d73-a9e1-1ade26742005-config-data\") pod \"014e96ac-8dcb-4d73-a9e1-1ade26742005\" (UID: \"014e96ac-8dcb-4d73-a9e1-1ade26742005\") " Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.354529 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/014e96ac-8dcb-4d73-a9e1-1ade26742005-combined-ca-bundle\") pod \"014e96ac-8dcb-4d73-a9e1-1ade26742005\" (UID: \"014e96ac-8dcb-4d73-a9e1-1ade26742005\") " Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.355116 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.355129 4847 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.355139 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf4ps\" (UniqueName: \"kubernetes.io/projected/e40815e0-c0e4-4265-94f8-c9c7b262a011-kube-api-access-xf4ps\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 
00:48:51.355148 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.355156 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e40815e0-c0e4-4265-94f8-c9c7b262a011-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.358057 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/014e96ac-8dcb-4d73-a9e1-1ade26742005-kube-api-access-zh7hb" (OuterVolumeSpecName: "kube-api-access-zh7hb") pod "014e96ac-8dcb-4d73-a9e1-1ade26742005" (UID: "014e96ac-8dcb-4d73-a9e1-1ade26742005"). InnerVolumeSpecName "kube-api-access-zh7hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.393705 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/014e96ac-8dcb-4d73-a9e1-1ade26742005-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "014e96ac-8dcb-4d73-a9e1-1ade26742005" (UID: "014e96ac-8dcb-4d73-a9e1-1ade26742005"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.435080 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aface83-1656-4958-b676-04bd0f99b9ac" path="/var/lib/kubelet/pods/3aface83-1656-4958-b676-04bd0f99b9ac/volumes" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.458176 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/014e96ac-8dcb-4d73-a9e1-1ade26742005-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.458214 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh7hb\" (UniqueName: \"kubernetes.io/projected/014e96ac-8dcb-4d73-a9e1-1ade26742005-kube-api-access-zh7hb\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.462763 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/014e96ac-8dcb-4d73-a9e1-1ade26742005-config-data" (OuterVolumeSpecName: "config-data") pod "014e96ac-8dcb-4d73-a9e1-1ade26742005" (UID: "014e96ac-8dcb-4d73-a9e1-1ade26742005"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.560411 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/014e96ac-8dcb-4d73-a9e1-1ade26742005-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.562467 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58d7bf495d-sp442" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.585061 4847 generic.go:334] "Generic (PLEG): container finished" podID="a612e518-e7f5-4c88-8534-16768f748bed" containerID="4778c2c18fbd393b728852821d8ac4b98b25a50062108e85a4bd2590b961ed21" exitCode=0 Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.585095 4847 generic.go:334] "Generic (PLEG): container finished" podID="a612e518-e7f5-4c88-8534-16768f748bed" containerID="79647fde066f83ce069c92c33723f7503bea64ac6946f516bb71192d81bfcb8f" exitCode=2 Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.585104 4847 generic.go:334] "Generic (PLEG): container finished" podID="a612e518-e7f5-4c88-8534-16768f748bed" containerID="a4faf8561254c979e53592ccd32511604ad40f95d92cab677e778c8ef0fb8e12" exitCode=0 Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.585147 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a612e518-e7f5-4c88-8534-16768f748bed","Type":"ContainerDied","Data":"4778c2c18fbd393b728852821d8ac4b98b25a50062108e85a4bd2590b961ed21"} Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.585487 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a612e518-e7f5-4c88-8534-16768f748bed","Type":"ContainerDied","Data":"79647fde066f83ce069c92c33723f7503bea64ac6946f516bb71192d81bfcb8f"} Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.585504 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"a612e518-e7f5-4c88-8534-16768f748bed","Type":"ContainerDied","Data":"a4faf8561254c979e53592ccd32511604ad40f95d92cab677e778c8ef0fb8e12"} Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.586950 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qxdsw" event={"ID":"e40815e0-c0e4-4265-94f8-c9c7b262a011","Type":"ContainerDied","Data":"f97906abe9d3fb6041ad055e8c7aed037198bba5e9a2a805ca267b749fc0b43d"} Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.586974 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f97906abe9d3fb6041ad055e8c7aed037198bba5e9a2a805ca267b749fc0b43d" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.587044 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qxdsw" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.589548 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-znxsz" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.590561 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-znxsz" event={"ID":"014e96ac-8dcb-4d73-a9e1-1ade26742005","Type":"ContainerDied","Data":"d6fca2e113bde5aa3bc71762b848d25e61025ebf3e5fe0246606f6e7e8f65367"} Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.590639 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6fca2e113bde5aa3bc71762b848d25e61025ebf3e5fe0246606f6e7e8f65367" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.745973 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58d7bf495d-sp442" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.878651 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 00:48:51 crc kubenswrapper[4847]: E0218 00:48:51.879411 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aface83-1656-4958-b676-04bd0f99b9ac" containerName="init" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.879428 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aface83-1656-4958-b676-04bd0f99b9ac" containerName="init" Feb 18 00:48:51 crc kubenswrapper[4847]: E0218 00:48:51.879442 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aface83-1656-4958-b676-04bd0f99b9ac" containerName="dnsmasq-dns" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.879449 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aface83-1656-4958-b676-04bd0f99b9ac" containerName="dnsmasq-dns" Feb 18 00:48:51 crc kubenswrapper[4847]: E0218 00:48:51.879464 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40815e0-c0e4-4265-94f8-c9c7b262a011" containerName="cinder-db-sync" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.879471 4847 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e40815e0-c0e4-4265-94f8-c9c7b262a011" containerName="cinder-db-sync" Feb 18 00:48:51 crc kubenswrapper[4847]: E0218 00:48:51.879486 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014e96ac-8dcb-4d73-a9e1-1ade26742005" containerName="heat-db-sync" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.879493 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="014e96ac-8dcb-4d73-a9e1-1ade26742005" containerName="heat-db-sync" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.879661 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="014e96ac-8dcb-4d73-a9e1-1ade26742005" containerName="heat-db-sync" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.879674 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="e40815e0-c0e4-4265-94f8-c9c7b262a011" containerName="cinder-db-sync" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.879687 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aface83-1656-4958-b676-04bd0f99b9ac" containerName="dnsmasq-dns" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.880709 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.882123 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kxl6j" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.882483 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.882811 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.891930 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.903381 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.969736 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/836d884c-054f-4eb6-93ef-1d6361564b01-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"836d884c-054f-4eb6-93ef-1d6361564b01\") " pod="openstack/cinder-scheduler-0" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.969819 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"836d884c-054f-4eb6-93ef-1d6361564b01\") " pod="openstack/cinder-scheduler-0" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.969856 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"836d884c-054f-4eb6-93ef-1d6361564b01\") " pod="openstack/cinder-scheduler-0" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.969885 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-config-data\") pod \"cinder-scheduler-0\" (UID: \"836d884c-054f-4eb6-93ef-1d6361564b01\") " pod="openstack/cinder-scheduler-0" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.969909 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-scripts\") pod \"cinder-scheduler-0\" (UID: \"836d884c-054f-4eb6-93ef-1d6361564b01\") " pod="openstack/cinder-scheduler-0" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.969956 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86tbz\" (UniqueName: \"kubernetes.io/projected/836d884c-054f-4eb6-93ef-1d6361564b01-kube-api-access-86tbz\") pod \"cinder-scheduler-0\" (UID: \"836d884c-054f-4eb6-93ef-1d6361564b01\") " pod="openstack/cinder-scheduler-0" Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.997066 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-cvlv7"] Feb 18 00:48:51 crc kubenswrapper[4847]: I0218 00:48:51.998911 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.010370 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-cvlv7"] Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.071738 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/836d884c-054f-4eb6-93ef-1d6361564b01-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"836d884c-054f-4eb6-93ef-1d6361564b01\") " pod="openstack/cinder-scheduler-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.071825 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"836d884c-054f-4eb6-93ef-1d6361564b01\") " pod="openstack/cinder-scheduler-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.071863 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"836d884c-054f-4eb6-93ef-1d6361564b01\") " pod="openstack/cinder-scheduler-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.071895 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-config-data\") pod \"cinder-scheduler-0\" (UID: \"836d884c-054f-4eb6-93ef-1d6361564b01\") " pod="openstack/cinder-scheduler-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.071927 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"836d884c-054f-4eb6-93ef-1d6361564b01\") " pod="openstack/cinder-scheduler-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.071971 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86tbz\" (UniqueName: \"kubernetes.io/projected/836d884c-054f-4eb6-93ef-1d6361564b01-kube-api-access-86tbz\") pod \"cinder-scheduler-0\" (UID: \"836d884c-054f-4eb6-93ef-1d6361564b01\") " pod="openstack/cinder-scheduler-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.072321 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/836d884c-054f-4eb6-93ef-1d6361564b01-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"836d884c-054f-4eb6-93ef-1d6361564b01\") " pod="openstack/cinder-scheduler-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.083849 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"836d884c-054f-4eb6-93ef-1d6361564b01\") " pod="openstack/cinder-scheduler-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.086107 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-config-data\") pod \"cinder-scheduler-0\" (UID: \"836d884c-054f-4eb6-93ef-1d6361564b01\") " pod="openstack/cinder-scheduler-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.086241 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"836d884c-054f-4eb6-93ef-1d6361564b01\") " pod="openstack/cinder-scheduler-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.090103 4847 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-scripts\") pod \"cinder-scheduler-0\" (UID: \"836d884c-054f-4eb6-93ef-1d6361564b01\") " pod="openstack/cinder-scheduler-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.104476 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86tbz\" (UniqueName: \"kubernetes.io/projected/836d884c-054f-4eb6-93ef-1d6361564b01-kube-api-access-86tbz\") pod \"cinder-scheduler-0\" (UID: \"836d884c-054f-4eb6-93ef-1d6361564b01\") " pod="openstack/cinder-scheduler-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.171274 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.173473 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-cvlv7\" (UID: \"e0a31394-e534-4372-9f15-344df4565d6a\") " pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.173618 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-cvlv7\" (UID: \"e0a31394-e534-4372-9f15-344df4565d6a\") " pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.173656 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-config\") pod \"dnsmasq-dns-6578955fd5-cvlv7\" (UID: \"e0a31394-e534-4372-9f15-344df4565d6a\") " 
pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.173695 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7jzv\" (UniqueName: \"kubernetes.io/projected/e0a31394-e534-4372-9f15-344df4565d6a-kube-api-access-r7jzv\") pod \"dnsmasq-dns-6578955fd5-cvlv7\" (UID: \"e0a31394-e534-4372-9f15-344df4565d6a\") " pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.173717 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-cvlv7\" (UID: \"e0a31394-e534-4372-9f15-344df4565d6a\") " pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.173786 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-dns-svc\") pod \"dnsmasq-dns-6578955fd5-cvlv7\" (UID: \"e0a31394-e534-4372-9f15-344df4565d6a\") " pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.187334 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.191007 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.206861 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.229238 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.274959 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-scripts\") pod \"cinder-api-0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " pod="openstack/cinder-api-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.275007 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-logs\") pod \"cinder-api-0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " pod="openstack/cinder-api-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.275034 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-dns-svc\") pod \"dnsmasq-dns-6578955fd5-cvlv7\" (UID: \"e0a31394-e534-4372-9f15-344df4565d6a\") " pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.275087 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mwkw\" (UniqueName: \"kubernetes.io/projected/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-kube-api-access-9mwkw\") pod \"cinder-api-0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " pod="openstack/cinder-api-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.275107 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-cvlv7\" (UID: 
\"e0a31394-e534-4372-9f15-344df4565d6a\") " pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.275127 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-config-data\") pod \"cinder-api-0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " pod="openstack/cinder-api-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.275204 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-cvlv7\" (UID: \"e0a31394-e534-4372-9f15-344df4565d6a\") " pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.275224 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " pod="openstack/cinder-api-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.275242 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-config-data-custom\") pod \"cinder-api-0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " pod="openstack/cinder-api-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.275263 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " pod="openstack/cinder-api-0" Feb 18 
00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.275282 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-config\") pod \"dnsmasq-dns-6578955fd5-cvlv7\" (UID: \"e0a31394-e534-4372-9f15-344df4565d6a\") " pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.275303 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7jzv\" (UniqueName: \"kubernetes.io/projected/e0a31394-e534-4372-9f15-344df4565d6a-kube-api-access-r7jzv\") pod \"dnsmasq-dns-6578955fd5-cvlv7\" (UID: \"e0a31394-e534-4372-9f15-344df4565d6a\") " pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.275325 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-cvlv7\" (UID: \"e0a31394-e534-4372-9f15-344df4565d6a\") " pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.277377 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-cvlv7\" (UID: \"e0a31394-e534-4372-9f15-344df4565d6a\") " pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.277401 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-dns-svc\") pod \"dnsmasq-dns-6578955fd5-cvlv7\" (UID: \"e0a31394-e534-4372-9f15-344df4565d6a\") " pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.277993 4847 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-cvlv7\" (UID: \"e0a31394-e534-4372-9f15-344df4565d6a\") " pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.278040 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-cvlv7\" (UID: \"e0a31394-e534-4372-9f15-344df4565d6a\") " pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.278577 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-config\") pod \"dnsmasq-dns-6578955fd5-cvlv7\" (UID: \"e0a31394-e534-4372-9f15-344df4565d6a\") " pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.302545 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7jzv\" (UniqueName: \"kubernetes.io/projected/e0a31394-e534-4372-9f15-344df4565d6a-kube-api-access-r7jzv\") pod \"dnsmasq-dns-6578955fd5-cvlv7\" (UID: \"e0a31394-e534-4372-9f15-344df4565d6a\") " pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.339351 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.382724 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-scripts\") pod \"cinder-api-0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " pod="openstack/cinder-api-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.382779 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-logs\") pod \"cinder-api-0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " pod="openstack/cinder-api-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.382828 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mwkw\" (UniqueName: \"kubernetes.io/projected/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-kube-api-access-9mwkw\") pod \"cinder-api-0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " pod="openstack/cinder-api-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.382853 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-config-data\") pod \"cinder-api-0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " pod="openstack/cinder-api-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.382918 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " pod="openstack/cinder-api-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.382939 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-config-data-custom\") pod \"cinder-api-0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " pod="openstack/cinder-api-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.382958 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " pod="openstack/cinder-api-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.385035 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-logs\") pod \"cinder-api-0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " pod="openstack/cinder-api-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.385189 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " pod="openstack/cinder-api-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.389960 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-config-data-custom\") pod \"cinder-api-0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " pod="openstack/cinder-api-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.408563 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-scripts\") pod \"cinder-api-0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " pod="openstack/cinder-api-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.430323 4847 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " pod="openstack/cinder-api-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.434845 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-config-data\") pod \"cinder-api-0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " pod="openstack/cinder-api-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.556742 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mwkw\" (UniqueName: \"kubernetes.io/projected/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-kube-api-access-9mwkw\") pod \"cinder-api-0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " pod="openstack/cinder-api-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.807522 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 00:48:52 crc kubenswrapper[4847]: I0218 00:48:52.947593 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6db955874-66wrk" Feb 18 00:48:53 crc kubenswrapper[4847]: I0218 00:48:53.043039 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 00:48:53 crc kubenswrapper[4847]: W0218 00:48:53.045200 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod836d884c_054f_4eb6_93ef_1d6361564b01.slice/crio-089e64e8108cd3d6270b30ecc5ddd6aaed15434361b6f4c937fe10fff8e2fbb8 WatchSource:0}: Error finding container 089e64e8108cd3d6270b30ecc5ddd6aaed15434361b6f4c937fe10fff8e2fbb8: Status 404 returned error can't find the container with id 089e64e8108cd3d6270b30ecc5ddd6aaed15434361b6f4c937fe10fff8e2fbb8 Feb 18 00:48:53 crc kubenswrapper[4847]: I0218 00:48:53.202488 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-cvlv7"] Feb 18 00:48:53 crc kubenswrapper[4847]: W0218 00:48:53.377711 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40db5dc9_34a9_467b_9617_56ee9fc2d7e0.slice/crio-b23095b7a3603248482da7464c08be77961c3150ae02f66c700facb4a510fc2b WatchSource:0}: Error finding container b23095b7a3603248482da7464c08be77961c3150ae02f66c700facb4a510fc2b: Status 404 returned error can't find the container with id b23095b7a3603248482da7464c08be77961c3150ae02f66c700facb4a510fc2b Feb 18 00:48:53 crc kubenswrapper[4847]: I0218 00:48:53.379366 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 00:48:53 crc kubenswrapper[4847]: I0218 00:48:53.652715 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"836d884c-054f-4eb6-93ef-1d6361564b01","Type":"ContainerStarted","Data":"089e64e8108cd3d6270b30ecc5ddd6aaed15434361b6f4c937fe10fff8e2fbb8"} Feb 18 00:48:53 crc kubenswrapper[4847]: I0218 00:48:53.655741 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" event={"ID":"e0a31394-e534-4372-9f15-344df4565d6a","Type":"ContainerStarted","Data":"4fc06426ddfd4f27a9e586355ed1dc32d38abb673d67ada5db1cf23441f0b665"} Feb 18 00:48:53 crc kubenswrapper[4847]: I0218 00:48:53.655766 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" event={"ID":"e0a31394-e534-4372-9f15-344df4565d6a","Type":"ContainerStarted","Data":"bc619f389e8d313cb38817e3901f1865faf4fce364fe2b72e7c3737f1ac5156f"} Feb 18 00:48:53 crc kubenswrapper[4847]: I0218 00:48:53.667122 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"40db5dc9-34a9-467b-9617-56ee9fc2d7e0","Type":"ContainerStarted","Data":"b23095b7a3603248482da7464c08be77961c3150ae02f66c700facb4a510fc2b"} Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.525612 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.564265 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-sg-core-conf-yaml\") pod \"a612e518-e7f5-4c88-8534-16768f748bed\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.564431 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a612e518-e7f5-4c88-8534-16768f748bed-run-httpd\") pod \"a612e518-e7f5-4c88-8534-16768f748bed\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.564485 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-combined-ca-bundle\") pod \"a612e518-e7f5-4c88-8534-16768f748bed\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.564509 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a612e518-e7f5-4c88-8534-16768f748bed-log-httpd\") pod \"a612e518-e7f5-4c88-8534-16768f748bed\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.564540 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-config-data\") pod \"a612e518-e7f5-4c88-8534-16768f748bed\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.564664 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-scripts\") pod \"a612e518-e7f5-4c88-8534-16768f748bed\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.564733 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chhx7\" (UniqueName: \"kubernetes.io/projected/a612e518-e7f5-4c88-8534-16768f748bed-kube-api-access-chhx7\") pod \"a612e518-e7f5-4c88-8534-16768f748bed\" (UID: \"a612e518-e7f5-4c88-8534-16768f748bed\") " Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.566997 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a612e518-e7f5-4c88-8534-16768f748bed-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a612e518-e7f5-4c88-8534-16768f748bed" (UID: "a612e518-e7f5-4c88-8534-16768f748bed"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.567202 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a612e518-e7f5-4c88-8534-16768f748bed-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a612e518-e7f5-4c88-8534-16768f748bed" (UID: "a612e518-e7f5-4c88-8534-16768f748bed"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.575681 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-scripts" (OuterVolumeSpecName: "scripts") pod "a612e518-e7f5-4c88-8534-16768f748bed" (UID: "a612e518-e7f5-4c88-8534-16768f748bed"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.595870 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a612e518-e7f5-4c88-8534-16768f748bed-kube-api-access-chhx7" (OuterVolumeSpecName: "kube-api-access-chhx7") pod "a612e518-e7f5-4c88-8534-16768f748bed" (UID: "a612e518-e7f5-4c88-8534-16768f748bed"). InnerVolumeSpecName "kube-api-access-chhx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.634071 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a612e518-e7f5-4c88-8534-16768f748bed" (UID: "a612e518-e7f5-4c88-8534-16768f748bed"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.666779 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.666820 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chhx7\" (UniqueName: \"kubernetes.io/projected/a612e518-e7f5-4c88-8534-16768f748bed-kube-api-access-chhx7\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.666836 4847 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.666847 4847 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a612e518-e7f5-4c88-8534-16768f748bed-run-httpd\") on node 
\"crc\" DevicePath \"\"" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.666859 4847 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a612e518-e7f5-4c88-8534-16768f748bed-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.691439 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"40db5dc9-34a9-467b-9617-56ee9fc2d7e0","Type":"ContainerStarted","Data":"003c966fbf199b63c19450634eb83278a298cea276426ff78c3165aa9057e396"} Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.694513 4847 generic.go:334] "Generic (PLEG): container finished" podID="a612e518-e7f5-4c88-8534-16768f748bed" containerID="677009da1af8e905666a7aeef540e1f94bb0fe8f2689e0098dbb4929c4cd3291" exitCode=0 Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.694629 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.694704 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a612e518-e7f5-4c88-8534-16768f748bed","Type":"ContainerDied","Data":"677009da1af8e905666a7aeef540e1f94bb0fe8f2689e0098dbb4929c4cd3291"} Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.694764 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a612e518-e7f5-4c88-8534-16768f748bed","Type":"ContainerDied","Data":"dbab888e590aa294fd77c7932937cbffe2815550f7457b0d0f55ac95edfc1d6c"} Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.694785 4847 scope.go:117] "RemoveContainer" containerID="4778c2c18fbd393b728852821d8ac4b98b25a50062108e85a4bd2590b961ed21" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.703701 4847 generic.go:334] "Generic (PLEG): container finished" podID="e0a31394-e534-4372-9f15-344df4565d6a" 
containerID="4fc06426ddfd4f27a9e586355ed1dc32d38abb673d67ada5db1cf23441f0b665" exitCode=0 Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.703744 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" event={"ID":"e0a31394-e534-4372-9f15-344df4565d6a","Type":"ContainerDied","Data":"4fc06426ddfd4f27a9e586355ed1dc32d38abb673d67ada5db1cf23441f0b665"} Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.703770 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" event={"ID":"e0a31394-e534-4372-9f15-344df4565d6a","Type":"ContainerStarted","Data":"e5d6cf1391aa302afa7c2a918cc4ec9344b3dd0b96943e63210badd255367a97"} Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.703872 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.734738 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" podStartSLOduration=3.734716897 podStartE2EDuration="3.734716897s" podCreationTimestamp="2026-02-18 00:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:48:54.727376154 +0000 UTC m=+1408.104727106" watchObservedRunningTime="2026-02-18 00:48:54.734716897 +0000 UTC m=+1408.112067839" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.742968 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a612e518-e7f5-4c88-8534-16768f748bed" (UID: "a612e518-e7f5-4c88-8534-16768f748bed"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.759406 4847 scope.go:117] "RemoveContainer" containerID="79647fde066f83ce069c92c33723f7503bea64ac6946f516bb71192d81bfcb8f" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.769413 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.785965 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-config-data" (OuterVolumeSpecName: "config-data") pod "a612e518-e7f5-4c88-8534-16768f748bed" (UID: "a612e518-e7f5-4c88-8534-16768f748bed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.809480 4847 scope.go:117] "RemoveContainer" containerID="677009da1af8e905666a7aeef540e1f94bb0fe8f2689e0098dbb4929c4cd3291" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.880991 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a612e518-e7f5-4c88-8534-16768f748bed-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.884474 4847 scope.go:117] "RemoveContainer" containerID="a4faf8561254c979e53592ccd32511604ad40f95d92cab677e778c8ef0fb8e12" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.921808 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.945953 4847 scope.go:117] "RemoveContainer" containerID="4778c2c18fbd393b728852821d8ac4b98b25a50062108e85a4bd2590b961ed21" Feb 18 00:48:54 crc kubenswrapper[4847]: E0218 00:48:54.946522 4847 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4778c2c18fbd393b728852821d8ac4b98b25a50062108e85a4bd2590b961ed21\": container with ID starting with 4778c2c18fbd393b728852821d8ac4b98b25a50062108e85a4bd2590b961ed21 not found: ID does not exist" containerID="4778c2c18fbd393b728852821d8ac4b98b25a50062108e85a4bd2590b961ed21" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.946589 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4778c2c18fbd393b728852821d8ac4b98b25a50062108e85a4bd2590b961ed21"} err="failed to get container status \"4778c2c18fbd393b728852821d8ac4b98b25a50062108e85a4bd2590b961ed21\": rpc error: code = NotFound desc = could not find container \"4778c2c18fbd393b728852821d8ac4b98b25a50062108e85a4bd2590b961ed21\": container with ID starting with 4778c2c18fbd393b728852821d8ac4b98b25a50062108e85a4bd2590b961ed21 not found: ID does not exist" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.946648 4847 scope.go:117] "RemoveContainer" containerID="79647fde066f83ce069c92c33723f7503bea64ac6946f516bb71192d81bfcb8f" Feb 18 00:48:54 crc kubenswrapper[4847]: E0218 00:48:54.947103 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79647fde066f83ce069c92c33723f7503bea64ac6946f516bb71192d81bfcb8f\": container with ID starting with 79647fde066f83ce069c92c33723f7503bea64ac6946f516bb71192d81bfcb8f not found: ID does not exist" containerID="79647fde066f83ce069c92c33723f7503bea64ac6946f516bb71192d81bfcb8f" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.947135 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79647fde066f83ce069c92c33723f7503bea64ac6946f516bb71192d81bfcb8f"} err="failed to get container status \"79647fde066f83ce069c92c33723f7503bea64ac6946f516bb71192d81bfcb8f\": rpc error: code = NotFound desc = could not find container 
\"79647fde066f83ce069c92c33723f7503bea64ac6946f516bb71192d81bfcb8f\": container with ID starting with 79647fde066f83ce069c92c33723f7503bea64ac6946f516bb71192d81bfcb8f not found: ID does not exist" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.947160 4847 scope.go:117] "RemoveContainer" containerID="677009da1af8e905666a7aeef540e1f94bb0fe8f2689e0098dbb4929c4cd3291" Feb 18 00:48:54 crc kubenswrapper[4847]: E0218 00:48:54.947529 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"677009da1af8e905666a7aeef540e1f94bb0fe8f2689e0098dbb4929c4cd3291\": container with ID starting with 677009da1af8e905666a7aeef540e1f94bb0fe8f2689e0098dbb4929c4cd3291 not found: ID does not exist" containerID="677009da1af8e905666a7aeef540e1f94bb0fe8f2689e0098dbb4929c4cd3291" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.947583 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"677009da1af8e905666a7aeef540e1f94bb0fe8f2689e0098dbb4929c4cd3291"} err="failed to get container status \"677009da1af8e905666a7aeef540e1f94bb0fe8f2689e0098dbb4929c4cd3291\": rpc error: code = NotFound desc = could not find container \"677009da1af8e905666a7aeef540e1f94bb0fe8f2689e0098dbb4929c4cd3291\": container with ID starting with 677009da1af8e905666a7aeef540e1f94bb0fe8f2689e0098dbb4929c4cd3291 not found: ID does not exist" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.947716 4847 scope.go:117] "RemoveContainer" containerID="a4faf8561254c979e53592ccd32511604ad40f95d92cab677e778c8ef0fb8e12" Feb 18 00:48:54 crc kubenswrapper[4847]: E0218 00:48:54.948304 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4faf8561254c979e53592ccd32511604ad40f95d92cab677e778c8ef0fb8e12\": container with ID starting with a4faf8561254c979e53592ccd32511604ad40f95d92cab677e778c8ef0fb8e12 not found: ID does not exist" 
containerID="a4faf8561254c979e53592ccd32511604ad40f95d92cab677e778c8ef0fb8e12" Feb 18 00:48:54 crc kubenswrapper[4847]: I0218 00:48:54.948340 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4faf8561254c979e53592ccd32511604ad40f95d92cab677e778c8ef0fb8e12"} err="failed to get container status \"a4faf8561254c979e53592ccd32511604ad40f95d92cab677e778c8ef0fb8e12\": rpc error: code = NotFound desc = could not find container \"a4faf8561254c979e53592ccd32511604ad40f95d92cab677e778c8ef0fb8e12\": container with ID starting with a4faf8561254c979e53592ccd32511604ad40f95d92cab677e778c8ef0fb8e12 not found: ID does not exist" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.061697 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.077289 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.101658 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:48:55 crc kubenswrapper[4847]: E0218 00:48:55.102054 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a612e518-e7f5-4c88-8534-16768f748bed" containerName="ceilometer-notification-agent" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.102069 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="a612e518-e7f5-4c88-8534-16768f748bed" containerName="ceilometer-notification-agent" Feb 18 00:48:55 crc kubenswrapper[4847]: E0218 00:48:55.102081 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a612e518-e7f5-4c88-8534-16768f748bed" containerName="ceilometer-central-agent" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.102087 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="a612e518-e7f5-4c88-8534-16768f748bed" containerName="ceilometer-central-agent" Feb 18 00:48:55 crc kubenswrapper[4847]: E0218 
00:48:55.102095 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a612e518-e7f5-4c88-8534-16768f748bed" containerName="sg-core" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.102101 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="a612e518-e7f5-4c88-8534-16768f748bed" containerName="sg-core" Feb 18 00:48:55 crc kubenswrapper[4847]: E0218 00:48:55.102124 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a612e518-e7f5-4c88-8534-16768f748bed" containerName="proxy-httpd" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.102131 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="a612e518-e7f5-4c88-8534-16768f748bed" containerName="proxy-httpd" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.102309 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="a612e518-e7f5-4c88-8534-16768f748bed" containerName="ceilometer-notification-agent" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.102334 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="a612e518-e7f5-4c88-8534-16768f748bed" containerName="proxy-httpd" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.102342 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="a612e518-e7f5-4c88-8534-16768f748bed" containerName="ceilometer-central-agent" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.102352 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="a612e518-e7f5-4c88-8534-16768f748bed" containerName="sg-core" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.116761 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.123851 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.124043 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.159425 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.189584 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " pod="openstack/ceilometer-0" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.189694 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b65fe68c-6cd7-4a94-8d02-1c84419628d5-run-httpd\") pod \"ceilometer-0\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " pod="openstack/ceilometer-0" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.189723 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-config-data\") pod \"ceilometer-0\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " pod="openstack/ceilometer-0" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.189760 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b65fe68c-6cd7-4a94-8d02-1c84419628d5-log-httpd\") pod \"ceilometer-0\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " 
pod="openstack/ceilometer-0" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.189794 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " pod="openstack/ceilometer-0" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.189822 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr7lz\" (UniqueName: \"kubernetes.io/projected/b65fe68c-6cd7-4a94-8d02-1c84419628d5-kube-api-access-qr7lz\") pod \"ceilometer-0\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " pod="openstack/ceilometer-0" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.189839 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-scripts\") pod \"ceilometer-0\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " pod="openstack/ceilometer-0" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.302888 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b65fe68c-6cd7-4a94-8d02-1c84419628d5-run-httpd\") pod \"ceilometer-0\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " pod="openstack/ceilometer-0" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.302974 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-config-data\") pod \"ceilometer-0\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " pod="openstack/ceilometer-0" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.303025 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b65fe68c-6cd7-4a94-8d02-1c84419628d5-log-httpd\") pod \"ceilometer-0\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " pod="openstack/ceilometer-0" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.303077 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " pod="openstack/ceilometer-0" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.303121 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr7lz\" (UniqueName: \"kubernetes.io/projected/b65fe68c-6cd7-4a94-8d02-1c84419628d5-kube-api-access-qr7lz\") pod \"ceilometer-0\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " pod="openstack/ceilometer-0" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.303147 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-scripts\") pod \"ceilometer-0\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " pod="openstack/ceilometer-0" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.303206 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " pod="openstack/ceilometer-0" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.304184 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b65fe68c-6cd7-4a94-8d02-1c84419628d5-log-httpd\") pod \"ceilometer-0\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " pod="openstack/ceilometer-0" Feb 18 00:48:55 crc 
kubenswrapper[4847]: I0218 00:48:55.304232 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b65fe68c-6cd7-4a94-8d02-1c84419628d5-run-httpd\") pod \"ceilometer-0\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " pod="openstack/ceilometer-0" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.308168 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " pod="openstack/ceilometer-0" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.308658 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-config-data\") pod \"ceilometer-0\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " pod="openstack/ceilometer-0" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.327370 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " pod="openstack/ceilometer-0" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.328011 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-scripts\") pod \"ceilometer-0\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " pod="openstack/ceilometer-0" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.345234 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr7lz\" (UniqueName: \"kubernetes.io/projected/b65fe68c-6cd7-4a94-8d02-1c84419628d5-kube-api-access-qr7lz\") pod \"ceilometer-0\" (UID: 
\"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " pod="openstack/ceilometer-0" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.448124 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.469523 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a612e518-e7f5-4c88-8534-16768f748bed" path="/var/lib/kubelet/pods/a612e518-e7f5-4c88-8534-16768f748bed/volumes" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.723321 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"40db5dc9-34a9-467b-9617-56ee9fc2d7e0","Type":"ContainerStarted","Data":"c2505a71abb7818cc5fd08322fc690f6d6a1d58559dc3464a68aa9d724835339"} Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.723738 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="40db5dc9-34a9-467b-9617-56ee9fc2d7e0" containerName="cinder-api-log" containerID="cri-o://003c966fbf199b63c19450634eb83278a298cea276426ff78c3165aa9057e396" gracePeriod=30 Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.724006 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.724260 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="40db5dc9-34a9-467b-9617-56ee9fc2d7e0" containerName="cinder-api" containerID="cri-o://c2505a71abb7818cc5fd08322fc690f6d6a1d58559dc3464a68aa9d724835339" gracePeriod=30 Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 00:48:55.727383 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"836d884c-054f-4eb6-93ef-1d6361564b01","Type":"ContainerStarted","Data":"47953f526fed4828233ef4a350bf871c0962232abcb2f7f3ece7325b5e00b206"} Feb 18 00:48:55 crc kubenswrapper[4847]: I0218 
00:48:55.751549 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.751526321 podStartE2EDuration="3.751526321s" podCreationTimestamp="2026-02-18 00:48:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:48:55.748058915 +0000 UTC m=+1409.125409857" watchObservedRunningTime="2026-02-18 00:48:55.751526321 +0000 UTC m=+1409.128877253" Feb 18 00:48:56 crc kubenswrapper[4847]: I0218 00:48:56.084507 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:48:56 crc kubenswrapper[4847]: W0218 00:48:56.089941 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb65fe68c_6cd7_4a94_8d02_1c84419628d5.slice/crio-5ef17493e07fa65fe8f3be3ee75f8fc508bfe475d71bea4c4df84719901d6f80 WatchSource:0}: Error finding container 5ef17493e07fa65fe8f3be3ee75f8fc508bfe475d71bea4c4df84719901d6f80: Status 404 returned error can't find the container with id 5ef17493e07fa65fe8f3be3ee75f8fc508bfe475d71bea4c4df84719901d6f80 Feb 18 00:48:56 crc kubenswrapper[4847]: I0218 00:48:56.737678 4847 generic.go:334] "Generic (PLEG): container finished" podID="40db5dc9-34a9-467b-9617-56ee9fc2d7e0" containerID="003c966fbf199b63c19450634eb83278a298cea276426ff78c3165aa9057e396" exitCode=143 Feb 18 00:48:56 crc kubenswrapper[4847]: I0218 00:48:56.737762 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"40db5dc9-34a9-467b-9617-56ee9fc2d7e0","Type":"ContainerDied","Data":"003c966fbf199b63c19450634eb83278a298cea276426ff78c3165aa9057e396"} Feb 18 00:48:56 crc kubenswrapper[4847]: I0218 00:48:56.742191 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"836d884c-054f-4eb6-93ef-1d6361564b01","Type":"ContainerStarted","Data":"07f34af9957a545eab1522b4d41d8f3f7498969870d2e1e94bf1b969165b1fb0"} Feb 18 00:48:56 crc kubenswrapper[4847]: I0218 00:48:56.748143 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b65fe68c-6cd7-4a94-8d02-1c84419628d5","Type":"ContainerStarted","Data":"f19c9fb397bc77a646bef92be811569a6ac42633a72a017f03ab0854c9a4e2d1"} Feb 18 00:48:56 crc kubenswrapper[4847]: I0218 00:48:56.748184 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b65fe68c-6cd7-4a94-8d02-1c84419628d5","Type":"ContainerStarted","Data":"5ef17493e07fa65fe8f3be3ee75f8fc508bfe475d71bea4c4df84719901d6f80"} Feb 18 00:48:56 crc kubenswrapper[4847]: I0218 00:48:56.765506 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.008439561 podStartE2EDuration="5.765485215s" podCreationTimestamp="2026-02-18 00:48:51 +0000 UTC" firstStartedPulling="2026-02-18 00:48:53.05203954 +0000 UTC m=+1406.429390482" lastFinishedPulling="2026-02-18 00:48:53.809085194 +0000 UTC m=+1407.186436136" observedRunningTime="2026-02-18 00:48:56.758964102 +0000 UTC m=+1410.136315044" watchObservedRunningTime="2026-02-18 00:48:56.765485215 +0000 UTC m=+1410.142836157" Feb 18 00:48:57 crc kubenswrapper[4847]: I0218 00:48:57.207291 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 00:48:57 crc kubenswrapper[4847]: I0218 00:48:57.632702 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5857d66f7d-gqg2m" Feb 18 00:48:57 crc kubenswrapper[4847]: I0218 00:48:57.785530 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b65fe68c-6cd7-4a94-8d02-1c84419628d5","Type":"ContainerStarted","Data":"0c7f17ecf6ee8c806ed10be9a55ccd029260d8a7c3ad803972003c3758d9d6bb"} Feb 18 00:48:57 crc kubenswrapper[4847]: I0218 00:48:57.904809 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-67dc676569-x5csl"] Feb 18 00:48:57 crc kubenswrapper[4847]: I0218 00:48:57.905109 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-67dc676569-x5csl" podUID="87700b8a-be8c-46fe-a7a6-ec022ab8c87c" containerName="neutron-api" containerID="cri-o://cbb7fc726aaf64f862df797728b8e837eb204434fc78233e5b929f74f594f2f4" gracePeriod=30 Feb 18 00:48:57 crc kubenswrapper[4847]: I0218 00:48:57.906296 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-67dc676569-x5csl" podUID="87700b8a-be8c-46fe-a7a6-ec022ab8c87c" containerName="neutron-httpd" containerID="cri-o://98c3847fc17c2dcdb878f91215424724bd9eeef26f6a6c0ba24055626060a239" gracePeriod=30 Feb 18 00:48:57 crc kubenswrapper[4847]: I0218 00:48:57.951933 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bf6d8bf75-gfz9n"] Feb 18 00:48:57 crc kubenswrapper[4847]: I0218 00:48:57.953513 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:57 crc kubenswrapper[4847]: I0218 00:48:57.976518 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bf6d8bf75-gfz9n"] Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.023459 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-67dc676569-x5csl" podUID="87700b8a-be8c-46fe-a7a6-ec022ab8c87c" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.180:9696/\": read tcp 10.217.0.2:50034->10.217.0.180:9696: read: connection reset by peer" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.077399 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhwlh\" (UniqueName: \"kubernetes.io/projected/660f92ea-ca1f-410b-b9f2-d42b2343e1d3-kube-api-access-fhwlh\") pod \"neutron-bf6d8bf75-gfz9n\" (UID: \"660f92ea-ca1f-410b-b9f2-d42b2343e1d3\") " pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.077698 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/660f92ea-ca1f-410b-b9f2-d42b2343e1d3-public-tls-certs\") pod \"neutron-bf6d8bf75-gfz9n\" (UID: \"660f92ea-ca1f-410b-b9f2-d42b2343e1d3\") " pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.077723 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/660f92ea-ca1f-410b-b9f2-d42b2343e1d3-internal-tls-certs\") pod \"neutron-bf6d8bf75-gfz9n\" (UID: \"660f92ea-ca1f-410b-b9f2-d42b2343e1d3\") " pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.077767 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/660f92ea-ca1f-410b-b9f2-d42b2343e1d3-ovndb-tls-certs\") pod \"neutron-bf6d8bf75-gfz9n\" (UID: \"660f92ea-ca1f-410b-b9f2-d42b2343e1d3\") " pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.077809 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/660f92ea-ca1f-410b-b9f2-d42b2343e1d3-httpd-config\") pod \"neutron-bf6d8bf75-gfz9n\" (UID: \"660f92ea-ca1f-410b-b9f2-d42b2343e1d3\") " pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.077839 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660f92ea-ca1f-410b-b9f2-d42b2343e1d3-combined-ca-bundle\") pod \"neutron-bf6d8bf75-gfz9n\" (UID: \"660f92ea-ca1f-410b-b9f2-d42b2343e1d3\") " pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.077886 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/660f92ea-ca1f-410b-b9f2-d42b2343e1d3-config\") pod \"neutron-bf6d8bf75-gfz9n\" (UID: \"660f92ea-ca1f-410b-b9f2-d42b2343e1d3\") " pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.179841 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhwlh\" (UniqueName: \"kubernetes.io/projected/660f92ea-ca1f-410b-b9f2-d42b2343e1d3-kube-api-access-fhwlh\") pod \"neutron-bf6d8bf75-gfz9n\" (UID: \"660f92ea-ca1f-410b-b9f2-d42b2343e1d3\") " pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.179908 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/660f92ea-ca1f-410b-b9f2-d42b2343e1d3-public-tls-certs\") pod \"neutron-bf6d8bf75-gfz9n\" (UID: \"660f92ea-ca1f-410b-b9f2-d42b2343e1d3\") " pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.179942 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/660f92ea-ca1f-410b-b9f2-d42b2343e1d3-internal-tls-certs\") pod \"neutron-bf6d8bf75-gfz9n\" (UID: \"660f92ea-ca1f-410b-b9f2-d42b2343e1d3\") " pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.180004 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/660f92ea-ca1f-410b-b9f2-d42b2343e1d3-ovndb-tls-certs\") pod \"neutron-bf6d8bf75-gfz9n\" (UID: \"660f92ea-ca1f-410b-b9f2-d42b2343e1d3\") " pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.180063 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/660f92ea-ca1f-410b-b9f2-d42b2343e1d3-httpd-config\") pod \"neutron-bf6d8bf75-gfz9n\" (UID: \"660f92ea-ca1f-410b-b9f2-d42b2343e1d3\") " pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.180100 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660f92ea-ca1f-410b-b9f2-d42b2343e1d3-combined-ca-bundle\") pod \"neutron-bf6d8bf75-gfz9n\" (UID: \"660f92ea-ca1f-410b-b9f2-d42b2343e1d3\") " pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.180170 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/660f92ea-ca1f-410b-b9f2-d42b2343e1d3-config\") pod \"neutron-bf6d8bf75-gfz9n\" (UID: 
\"660f92ea-ca1f-410b-b9f2-d42b2343e1d3\") " pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.185989 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/660f92ea-ca1f-410b-b9f2-d42b2343e1d3-internal-tls-certs\") pod \"neutron-bf6d8bf75-gfz9n\" (UID: \"660f92ea-ca1f-410b-b9f2-d42b2343e1d3\") " pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.187472 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/660f92ea-ca1f-410b-b9f2-d42b2343e1d3-ovndb-tls-certs\") pod \"neutron-bf6d8bf75-gfz9n\" (UID: \"660f92ea-ca1f-410b-b9f2-d42b2343e1d3\") " pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.187692 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/660f92ea-ca1f-410b-b9f2-d42b2343e1d3-public-tls-certs\") pod \"neutron-bf6d8bf75-gfz9n\" (UID: \"660f92ea-ca1f-410b-b9f2-d42b2343e1d3\") " pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.192241 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/660f92ea-ca1f-410b-b9f2-d42b2343e1d3-httpd-config\") pod \"neutron-bf6d8bf75-gfz9n\" (UID: \"660f92ea-ca1f-410b-b9f2-d42b2343e1d3\") " pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.196657 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/660f92ea-ca1f-410b-b9f2-d42b2343e1d3-config\") pod \"neutron-bf6d8bf75-gfz9n\" (UID: \"660f92ea-ca1f-410b-b9f2-d42b2343e1d3\") " pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.201268 4847 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhwlh\" (UniqueName: \"kubernetes.io/projected/660f92ea-ca1f-410b-b9f2-d42b2343e1d3-kube-api-access-fhwlh\") pod \"neutron-bf6d8bf75-gfz9n\" (UID: \"660f92ea-ca1f-410b-b9f2-d42b2343e1d3\") " pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.203721 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/660f92ea-ca1f-410b-b9f2-d42b2343e1d3-combined-ca-bundle\") pod \"neutron-bf6d8bf75-gfz9n\" (UID: \"660f92ea-ca1f-410b-b9f2-d42b2343e1d3\") " pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.286675 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.723040 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s7pbj"] Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.725562 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s7pbj" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.736447 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s7pbj"] Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.794718 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsqzb\" (UniqueName: \"kubernetes.io/projected/8ec71ab6-3f40-4239-b6b7-db48ce3aaca1-kube-api-access-qsqzb\") pod \"redhat-operators-s7pbj\" (UID: \"8ec71ab6-3f40-4239-b6b7-db48ce3aaca1\") " pod="openshift-marketplace/redhat-operators-s7pbj" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.794784 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ec71ab6-3f40-4239-b6b7-db48ce3aaca1-utilities\") pod \"redhat-operators-s7pbj\" (UID: \"8ec71ab6-3f40-4239-b6b7-db48ce3aaca1\") " pod="openshift-marketplace/redhat-operators-s7pbj" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.794823 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ec71ab6-3f40-4239-b6b7-db48ce3aaca1-catalog-content\") pod \"redhat-operators-s7pbj\" (UID: \"8ec71ab6-3f40-4239-b6b7-db48ce3aaca1\") " pod="openshift-marketplace/redhat-operators-s7pbj" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.800456 4847 generic.go:334] "Generic (PLEG): container finished" podID="87700b8a-be8c-46fe-a7a6-ec022ab8c87c" containerID="98c3847fc17c2dcdb878f91215424724bd9eeef26f6a6c0ba24055626060a239" exitCode=0 Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.800505 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67dc676569-x5csl" 
event={"ID":"87700b8a-be8c-46fe-a7a6-ec022ab8c87c","Type":"ContainerDied","Data":"98c3847fc17c2dcdb878f91215424724bd9eeef26f6a6c0ba24055626060a239"} Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.803638 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b65fe68c-6cd7-4a94-8d02-1c84419628d5","Type":"ContainerStarted","Data":"564465d8075ee018b487799132a846e25de7acb794f1c7fc38c273a5d7ea0862"} Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.896084 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsqzb\" (UniqueName: \"kubernetes.io/projected/8ec71ab6-3f40-4239-b6b7-db48ce3aaca1-kube-api-access-qsqzb\") pod \"redhat-operators-s7pbj\" (UID: \"8ec71ab6-3f40-4239-b6b7-db48ce3aaca1\") " pod="openshift-marketplace/redhat-operators-s7pbj" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.896378 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ec71ab6-3f40-4239-b6b7-db48ce3aaca1-utilities\") pod \"redhat-operators-s7pbj\" (UID: \"8ec71ab6-3f40-4239-b6b7-db48ce3aaca1\") " pod="openshift-marketplace/redhat-operators-s7pbj" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.896415 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ec71ab6-3f40-4239-b6b7-db48ce3aaca1-catalog-content\") pod \"redhat-operators-s7pbj\" (UID: \"8ec71ab6-3f40-4239-b6b7-db48ce3aaca1\") " pod="openshift-marketplace/redhat-operators-s7pbj" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.897955 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ec71ab6-3f40-4239-b6b7-db48ce3aaca1-utilities\") pod \"redhat-operators-s7pbj\" (UID: \"8ec71ab6-3f40-4239-b6b7-db48ce3aaca1\") " pod="openshift-marketplace/redhat-operators-s7pbj" Feb 18 
00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.898149 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ec71ab6-3f40-4239-b6b7-db48ce3aaca1-catalog-content\") pod \"redhat-operators-s7pbj\" (UID: \"8ec71ab6-3f40-4239-b6b7-db48ce3aaca1\") " pod="openshift-marketplace/redhat-operators-s7pbj" Feb 18 00:48:58 crc kubenswrapper[4847]: I0218 00:48:58.916349 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsqzb\" (UniqueName: \"kubernetes.io/projected/8ec71ab6-3f40-4239-b6b7-db48ce3aaca1-kube-api-access-qsqzb\") pod \"redhat-operators-s7pbj\" (UID: \"8ec71ab6-3f40-4239-b6b7-db48ce3aaca1\") " pod="openshift-marketplace/redhat-operators-s7pbj" Feb 18 00:48:59 crc kubenswrapper[4847]: I0218 00:48:59.078748 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7pbj" Feb 18 00:48:59 crc kubenswrapper[4847]: I0218 00:48:59.135922 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bf6d8bf75-gfz9n"] Feb 18 00:48:59 crc kubenswrapper[4847]: W0218 00:48:59.150177 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod660f92ea_ca1f_410b_b9f2_d42b2343e1d3.slice/crio-d40cf14edc3fc53d66929a6ef88e9ef52324b299167ded57beb33a361554a34d WatchSource:0}: Error finding container d40cf14edc3fc53d66929a6ef88e9ef52324b299167ded57beb33a361554a34d: Status 404 returned error can't find the container with id d40cf14edc3fc53d66929a6ef88e9ef52324b299167ded57beb33a361554a34d Feb 18 00:48:59 crc kubenswrapper[4847]: I0218 00:48:59.151162 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6db955874-66wrk" Feb 18 00:48:59 crc kubenswrapper[4847]: I0218 00:48:59.249420 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-api-58d7bf495d-sp442"] Feb 18 00:48:59 crc kubenswrapper[4847]: I0218 00:48:59.253553 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-58d7bf495d-sp442" podUID="015b3baa-45c2-4f4e-88d3-2aa917d3578c" containerName="barbican-api-log" containerID="cri-o://5b8f6a2ce95895255652bc5fe6164bb458de1e16b2b2234e29a39ca429fb3046" gracePeriod=30 Feb 18 00:48:59 crc kubenswrapper[4847]: I0218 00:48:59.253727 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-58d7bf495d-sp442" podUID="015b3baa-45c2-4f4e-88d3-2aa917d3578c" containerName="barbican-api" containerID="cri-o://3d48adfb40645576d44e295308ca07c745dc85a559e180c16971f09c4d2cc672" gracePeriod=30 Feb 18 00:48:59 crc kubenswrapper[4847]: I0218 00:48:59.711741 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s7pbj"] Feb 18 00:48:59 crc kubenswrapper[4847]: I0218 00:48:59.833868 4847 generic.go:334] "Generic (PLEG): container finished" podID="015b3baa-45c2-4f4e-88d3-2aa917d3578c" containerID="5b8f6a2ce95895255652bc5fe6164bb458de1e16b2b2234e29a39ca429fb3046" exitCode=143 Feb 18 00:48:59 crc kubenswrapper[4847]: I0218 00:48:59.833930 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58d7bf495d-sp442" event={"ID":"015b3baa-45c2-4f4e-88d3-2aa917d3578c","Type":"ContainerDied","Data":"5b8f6a2ce95895255652bc5fe6164bb458de1e16b2b2234e29a39ca429fb3046"} Feb 18 00:48:59 crc kubenswrapper[4847]: I0218 00:48:59.854621 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bf6d8bf75-gfz9n" event={"ID":"660f92ea-ca1f-410b-b9f2-d42b2343e1d3","Type":"ContainerStarted","Data":"94eb8acd249246d09a2ce6008116a6873dc7d0a26f3b6c53cbdfd29e48045f29"} Feb 18 00:48:59 crc kubenswrapper[4847]: I0218 00:48:59.854684 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bf6d8bf75-gfz9n" 
event={"ID":"660f92ea-ca1f-410b-b9f2-d42b2343e1d3","Type":"ContainerStarted","Data":"ba7e32511332c64ac5d00b2ab68f5e0bd327c64c326e0a20853206bf9af9463c"} Feb 18 00:48:59 crc kubenswrapper[4847]: I0218 00:48:59.854700 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bf6d8bf75-gfz9n" event={"ID":"660f92ea-ca1f-410b-b9f2-d42b2343e1d3","Type":"ContainerStarted","Data":"d40cf14edc3fc53d66929a6ef88e9ef52324b299167ded57beb33a361554a34d"} Feb 18 00:48:59 crc kubenswrapper[4847]: I0218 00:48:59.854733 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:48:59 crc kubenswrapper[4847]: I0218 00:48:59.874565 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b65fe68c-6cd7-4a94-8d02-1c84419628d5","Type":"ContainerStarted","Data":"ec39ee5f634c877e26b18d94affe0edad0d99af50d157477f2a6dd96452cb94b"} Feb 18 00:48:59 crc kubenswrapper[4847]: I0218 00:48:59.874953 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 00:48:59 crc kubenswrapper[4847]: I0218 00:48:59.877353 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7pbj" event={"ID":"8ec71ab6-3f40-4239-b6b7-db48ce3aaca1","Type":"ContainerStarted","Data":"9a9c80be7ca4d1eb9f517510e1aae5cb55d5dffdce4014ca750a9b3d85d0eac9"} Feb 18 00:48:59 crc kubenswrapper[4847]: I0218 00:48:59.918334 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-bf6d8bf75-gfz9n" podStartSLOduration=2.918310181 podStartE2EDuration="2.918310181s" podCreationTimestamp="2026-02-18 00:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:48:59.902466374 +0000 UTC m=+1413.279817316" watchObservedRunningTime="2026-02-18 00:48:59.918310181 +0000 UTC m=+1413.295661123" Feb 18 00:48:59 crc 
kubenswrapper[4847]: I0218 00:48:59.940381 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-67dc676569-x5csl" podUID="87700b8a-be8c-46fe-a7a6-ec022ab8c87c" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.180:9696/\": dial tcp 10.217.0.180:9696: connect: connection refused" Feb 18 00:48:59 crc kubenswrapper[4847]: I0218 00:48:59.947170 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.5888987559999999 podStartE2EDuration="4.947150362s" podCreationTimestamp="2026-02-18 00:48:55 +0000 UTC" firstStartedPulling="2026-02-18 00:48:56.093217002 +0000 UTC m=+1409.470567964" lastFinishedPulling="2026-02-18 00:48:59.451468628 +0000 UTC m=+1412.828819570" observedRunningTime="2026-02-18 00:48:59.920794253 +0000 UTC m=+1413.298145195" watchObservedRunningTime="2026-02-18 00:48:59.947150362 +0000 UTC m=+1413.324501304" Feb 18 00:49:00 crc kubenswrapper[4847]: I0218 00:49:00.888714 4847 generic.go:334] "Generic (PLEG): container finished" podID="8ec71ab6-3f40-4239-b6b7-db48ce3aaca1" containerID="4f8f3d5269e21eee2a0effe033a8dfe0bec0cdc52888ef4c27679ff4128f5235" exitCode=0 Feb 18 00:49:00 crc kubenswrapper[4847]: I0218 00:49:00.888915 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7pbj" event={"ID":"8ec71ab6-3f40-4239-b6b7-db48ce3aaca1","Type":"ContainerDied","Data":"4f8f3d5269e21eee2a0effe033a8dfe0bec0cdc52888ef4c27679ff4128f5235"} Feb 18 00:49:02 crc kubenswrapper[4847]: I0218 00:49:02.342861 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:49:02 crc kubenswrapper[4847]: I0218 00:49:02.421008 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-xm5hm"] Feb 18 00:49:02 crc kubenswrapper[4847]: I0218 00:49:02.421260 4847 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm" podUID="9ff37608-c71f-48aa-9205-8aae29841abb" containerName="dnsmasq-dns" containerID="cri-o://66d57d39a883115ade6701d73afc8016bbeb70be832c3d63a87cb2c56d84697c" gracePeriod=10 Feb 18 00:49:02 crc kubenswrapper[4847]: I0218 00:49:02.592802 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-58d7bf495d-sp442" podUID="015b3baa-45c2-4f4e-88d3-2aa917d3578c" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.186:9311/healthcheck\": read tcp 10.217.0.2:45842->10.217.0.186:9311: read: connection reset by peer" Feb 18 00:49:02 crc kubenswrapper[4847]: I0218 00:49:02.593178 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-58d7bf495d-sp442" podUID="015b3baa-45c2-4f4e-88d3-2aa917d3578c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.186:9311/healthcheck\": read tcp 10.217.0.2:45834->10.217.0.186:9311: read: connection reset by peer" Feb 18 00:49:02 crc kubenswrapper[4847]: I0218 00:49:02.936615 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 00:49:02 crc kubenswrapper[4847]: I0218 00:49:02.942170 4847 generic.go:334] "Generic (PLEG): container finished" podID="015b3baa-45c2-4f4e-88d3-2aa917d3578c" containerID="3d48adfb40645576d44e295308ca07c745dc85a559e180c16971f09c4d2cc672" exitCode=0 Feb 18 00:49:02 crc kubenswrapper[4847]: I0218 00:49:02.942261 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58d7bf495d-sp442" event={"ID":"015b3baa-45c2-4f4e-88d3-2aa917d3578c","Type":"ContainerDied","Data":"3d48adfb40645576d44e295308ca07c745dc85a559e180c16971f09c4d2cc672"} Feb 18 00:49:02 crc kubenswrapper[4847]: I0218 00:49:02.964449 4847 generic.go:334] "Generic (PLEG): container finished" podID="9ff37608-c71f-48aa-9205-8aae29841abb" 
containerID="66d57d39a883115ade6701d73afc8016bbeb70be832c3d63a87cb2c56d84697c" exitCode=0 Feb 18 00:49:02 crc kubenswrapper[4847]: I0218 00:49:02.964518 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm" event={"ID":"9ff37608-c71f-48aa-9205-8aae29841abb","Type":"ContainerDied","Data":"66d57d39a883115ade6701d73afc8016bbeb70be832c3d63a87cb2c56d84697c"} Feb 18 00:49:02 crc kubenswrapper[4847]: I0218 00:49:02.983900 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7pbj" event={"ID":"8ec71ab6-3f40-4239-b6b7-db48ce3aaca1","Type":"ContainerStarted","Data":"398cc9552940ace5c4e55575e374e53d5d203d21a5704b88eac8d9dd803d35e2"} Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.005178 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.108175 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jt7tq"] Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.148292 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jt7tq" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.162582 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jt7tq"] Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.251306 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpvc7\" (UniqueName: \"kubernetes.io/projected/de202952-4ed4-4cc5-8eb4-1d167600a639-kube-api-access-dpvc7\") pod \"community-operators-jt7tq\" (UID: \"de202952-4ed4-4cc5-8eb4-1d167600a639\") " pod="openshift-marketplace/community-operators-jt7tq" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.251475 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de202952-4ed4-4cc5-8eb4-1d167600a639-utilities\") pod \"community-operators-jt7tq\" (UID: \"de202952-4ed4-4cc5-8eb4-1d167600a639\") " pod="openshift-marketplace/community-operators-jt7tq" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.251538 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de202952-4ed4-4cc5-8eb4-1d167600a639-catalog-content\") pod \"community-operators-jt7tq\" (UID: \"de202952-4ed4-4cc5-8eb4-1d167600a639\") " pod="openshift-marketplace/community-operators-jt7tq" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.345862 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.353339 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpvc7\" (UniqueName: \"kubernetes.io/projected/de202952-4ed4-4cc5-8eb4-1d167600a639-kube-api-access-dpvc7\") pod \"community-operators-jt7tq\" (UID: \"de202952-4ed4-4cc5-8eb4-1d167600a639\") " pod="openshift-marketplace/community-operators-jt7tq" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.353404 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de202952-4ed4-4cc5-8eb4-1d167600a639-utilities\") pod \"community-operators-jt7tq\" (UID: \"de202952-4ed4-4cc5-8eb4-1d167600a639\") " pod="openshift-marketplace/community-operators-jt7tq" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.353434 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de202952-4ed4-4cc5-8eb4-1d167600a639-catalog-content\") pod \"community-operators-jt7tq\" (UID: \"de202952-4ed4-4cc5-8eb4-1d167600a639\") " pod="openshift-marketplace/community-operators-jt7tq" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.354133 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de202952-4ed4-4cc5-8eb4-1d167600a639-utilities\") pod \"community-operators-jt7tq\" (UID: \"de202952-4ed4-4cc5-8eb4-1d167600a639\") " pod="openshift-marketplace/community-operators-jt7tq" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.354144 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de202952-4ed4-4cc5-8eb4-1d167600a639-catalog-content\") pod \"community-operators-jt7tq\" (UID: \"de202952-4ed4-4cc5-8eb4-1d167600a639\") " 
pod="openshift-marketplace/community-operators-jt7tq" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.417476 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpvc7\" (UniqueName: \"kubernetes.io/projected/de202952-4ed4-4cc5-8eb4-1d167600a639-kube-api-access-dpvc7\") pod \"community-operators-jt7tq\" (UID: \"de202952-4ed4-4cc5-8eb4-1d167600a639\") " pod="openshift-marketplace/community-operators-jt7tq" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.455942 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-ovsdbserver-nb\") pod \"9ff37608-c71f-48aa-9205-8aae29841abb\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.456022 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txlql\" (UniqueName: \"kubernetes.io/projected/9ff37608-c71f-48aa-9205-8aae29841abb-kube-api-access-txlql\") pod \"9ff37608-c71f-48aa-9205-8aae29841abb\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.456099 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-dns-swift-storage-0\") pod \"9ff37608-c71f-48aa-9205-8aae29841abb\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.456140 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-dns-svc\") pod \"9ff37608-c71f-48aa-9205-8aae29841abb\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.456293 4847 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-config\") pod \"9ff37608-c71f-48aa-9205-8aae29841abb\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.456354 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-ovsdbserver-sb\") pod \"9ff37608-c71f-48aa-9205-8aae29841abb\" (UID: \"9ff37608-c71f-48aa-9205-8aae29841abb\") " Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.517826 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ff37608-c71f-48aa-9205-8aae29841abb-kube-api-access-txlql" (OuterVolumeSpecName: "kube-api-access-txlql") pod "9ff37608-c71f-48aa-9205-8aae29841abb" (UID: "9ff37608-c71f-48aa-9205-8aae29841abb"). InnerVolumeSpecName "kube-api-access-txlql". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.559078 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txlql\" (UniqueName: \"kubernetes.io/projected/9ff37608-c71f-48aa-9205-8aae29841abb-kube-api-access-txlql\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.606344 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-config" (OuterVolumeSpecName: "config") pod "9ff37608-c71f-48aa-9205-8aae29841abb" (UID: "9ff37608-c71f-48aa-9205-8aae29841abb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.635130 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jt7tq" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.638218 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58d7bf495d-sp442" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.661254 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.682077 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9ff37608-c71f-48aa-9205-8aae29841abb" (UID: "9ff37608-c71f-48aa-9205-8aae29841abb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.702410 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9ff37608-c71f-48aa-9205-8aae29841abb" (UID: "9ff37608-c71f-48aa-9205-8aae29841abb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.702589 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9ff37608-c71f-48aa-9205-8aae29841abb" (UID: "9ff37608-c71f-48aa-9205-8aae29841abb"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.724087 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9ff37608-c71f-48aa-9205-8aae29841abb" (UID: "9ff37608-c71f-48aa-9205-8aae29841abb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.765950 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/015b3baa-45c2-4f4e-88d3-2aa917d3578c-logs\") pod \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\" (UID: \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\") " Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.766068 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9h2w\" (UniqueName: \"kubernetes.io/projected/015b3baa-45c2-4f4e-88d3-2aa917d3578c-kube-api-access-q9h2w\") pod \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\" (UID: \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\") " Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.766122 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015b3baa-45c2-4f4e-88d3-2aa917d3578c-combined-ca-bundle\") pod \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\" (UID: \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\") " Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.766219 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/015b3baa-45c2-4f4e-88d3-2aa917d3578c-config-data-custom\") pod \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\" (UID: \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\") " Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.766295 
4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/015b3baa-45c2-4f4e-88d3-2aa917d3578c-config-data\") pod \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\" (UID: \"015b3baa-45c2-4f4e-88d3-2aa917d3578c\") " Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.766862 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.766895 4847 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.766911 4847 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.766922 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ff37608-c71f-48aa-9205-8aae29841abb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.767704 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/015b3baa-45c2-4f4e-88d3-2aa917d3578c-logs" (OuterVolumeSpecName: "logs") pod "015b3baa-45c2-4f4e-88d3-2aa917d3578c" (UID: "015b3baa-45c2-4f4e-88d3-2aa917d3578c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.773845 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/015b3baa-45c2-4f4e-88d3-2aa917d3578c-kube-api-access-q9h2w" (OuterVolumeSpecName: "kube-api-access-q9h2w") pod "015b3baa-45c2-4f4e-88d3-2aa917d3578c" (UID: "015b3baa-45c2-4f4e-88d3-2aa917d3578c"). InnerVolumeSpecName "kube-api-access-q9h2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.776381 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015b3baa-45c2-4f4e-88d3-2aa917d3578c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "015b3baa-45c2-4f4e-88d3-2aa917d3578c" (UID: "015b3baa-45c2-4f4e-88d3-2aa917d3578c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.833071 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015b3baa-45c2-4f4e-88d3-2aa917d3578c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "015b3baa-45c2-4f4e-88d3-2aa917d3578c" (UID: "015b3baa-45c2-4f4e-88d3-2aa917d3578c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.870067 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015b3baa-45c2-4f4e-88d3-2aa917d3578c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.870100 4847 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/015b3baa-45c2-4f4e-88d3-2aa917d3578c-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.870111 4847 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/015b3baa-45c2-4f4e-88d3-2aa917d3578c-logs\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.870121 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9h2w\" (UniqueName: \"kubernetes.io/projected/015b3baa-45c2-4f4e-88d3-2aa917d3578c-kube-api-access-q9h2w\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.888917 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015b3baa-45c2-4f4e-88d3-2aa917d3578c-config-data" (OuterVolumeSpecName: "config-data") pod "015b3baa-45c2-4f4e-88d3-2aa917d3578c" (UID: "015b3baa-45c2-4f4e-88d3-2aa917d3578c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:03 crc kubenswrapper[4847]: I0218 00:49:03.976435 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/015b3baa-45c2-4f4e-88d3-2aa917d3578c-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.040754 4847 generic.go:334] "Generic (PLEG): container finished" podID="8ec71ab6-3f40-4239-b6b7-db48ce3aaca1" containerID="398cc9552940ace5c4e55575e374e53d5d203d21a5704b88eac8d9dd803d35e2" exitCode=0 Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.041086 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7pbj" event={"ID":"8ec71ab6-3f40-4239-b6b7-db48ce3aaca1","Type":"ContainerDied","Data":"398cc9552940ace5c4e55575e374e53d5d203d21a5704b88eac8d9dd803d35e2"} Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.052260 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58d7bf495d-sp442" event={"ID":"015b3baa-45c2-4f4e-88d3-2aa917d3578c","Type":"ContainerDied","Data":"de45f85e9907ea9adfde3111e5df2da33c716158209d3c24c8657efa02b2c31d"} Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.052326 4847 scope.go:117] "RemoveContainer" containerID="3d48adfb40645576d44e295308ca07c745dc85a559e180c16971f09c4d2cc672" Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.052503 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-58d7bf495d-sp442" Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.086765 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm" event={"ID":"9ff37608-c71f-48aa-9205-8aae29841abb","Type":"ContainerDied","Data":"14163aa0a955a273d886277c35143ddf51b3bbef9030e7397f12b74d4a001c08"} Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.086862 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-xm5hm" Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.113767 4847 generic.go:334] "Generic (PLEG): container finished" podID="87700b8a-be8c-46fe-a7a6-ec022ab8c87c" containerID="cbb7fc726aaf64f862df797728b8e837eb204434fc78233e5b929f74f594f2f4" exitCode=0 Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.114004 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="836d884c-054f-4eb6-93ef-1d6361564b01" containerName="cinder-scheduler" containerID="cri-o://47953f526fed4828233ef4a350bf871c0962232abcb2f7f3ece7325b5e00b206" gracePeriod=30 Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.114081 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67dc676569-x5csl" event={"ID":"87700b8a-be8c-46fe-a7a6-ec022ab8c87c","Type":"ContainerDied","Data":"cbb7fc726aaf64f862df797728b8e837eb204434fc78233e5b929f74f594f2f4"} Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.114510 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="836d884c-054f-4eb6-93ef-1d6361564b01" containerName="probe" containerID="cri-o://07f34af9957a545eab1522b4d41d8f3f7498969870d2e1e94bf1b969165b1fb0" gracePeriod=30 Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.152952 4847 scope.go:117] "RemoveContainer" 
containerID="5b8f6a2ce95895255652bc5fe6164bb458de1e16b2b2234e29a39ca429fb3046" Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.194926 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-58d7bf495d-sp442"] Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.229856 4847 scope.go:117] "RemoveContainer" containerID="66d57d39a883115ade6701d73afc8016bbeb70be832c3d63a87cb2c56d84697c" Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.235209 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-58d7bf495d-sp442"] Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.248207 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-xm5hm"] Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.259964 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-xm5hm"] Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.274285 4847 scope.go:117] "RemoveContainer" containerID="c41d207ac2e62cb3235773bb4f3fa706c060b52735f066c84765da12d1182c40" Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.292282 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jt7tq"] Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.647441 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-67dc676569-x5csl" Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.698454 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-config\") pod \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.698519 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-combined-ca-bundle\") pod \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.698724 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-ovndb-tls-certs\") pod \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.698795 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-httpd-config\") pod \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.698828 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-public-tls-certs\") pod \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.698855 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s77wj\" (UniqueName: 
\"kubernetes.io/projected/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-kube-api-access-s77wj\") pod \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.698878 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-internal-tls-certs\") pod \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\" (UID: \"87700b8a-be8c-46fe-a7a6-ec022ab8c87c\") " Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.706475 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "87700b8a-be8c-46fe-a7a6-ec022ab8c87c" (UID: "87700b8a-be8c-46fe-a7a6-ec022ab8c87c"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.713363 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-556dbf5b5b-fmjz4" Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.720790 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-kube-api-access-s77wj" (OuterVolumeSpecName: "kube-api-access-s77wj") pod "87700b8a-be8c-46fe-a7a6-ec022ab8c87c" (UID: "87700b8a-be8c-46fe-a7a6-ec022ab8c87c"). InnerVolumeSpecName "kube-api-access-s77wj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.789173 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-556dbf5b5b-fmjz4" Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.800815 4847 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.800847 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s77wj\" (UniqueName: \"kubernetes.io/projected/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-kube-api-access-s77wj\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.823847 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "87700b8a-be8c-46fe-a7a6-ec022ab8c87c" (UID: "87700b8a-be8c-46fe-a7a6-ec022ab8c87c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.834635 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "87700b8a-be8c-46fe-a7a6-ec022ab8c87c" (UID: "87700b8a-be8c-46fe-a7a6-ec022ab8c87c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.874100 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-config" (OuterVolumeSpecName: "config") pod "87700b8a-be8c-46fe-a7a6-ec022ab8c87c" (UID: "87700b8a-be8c-46fe-a7a6-ec022ab8c87c"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.897396 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "87700b8a-be8c-46fe-a7a6-ec022ab8c87c" (UID: "87700b8a-be8c-46fe-a7a6-ec022ab8c87c"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.903893 4847 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.903923 4847 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.903948 4847 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.903958 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:04 crc kubenswrapper[4847]: I0218 00:49:04.921780 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87700b8a-be8c-46fe-a7a6-ec022ab8c87c" (UID: "87700b8a-be8c-46fe-a7a6-ec022ab8c87c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.007419 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87700b8a-be8c-46fe-a7a6-ec022ab8c87c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.030639 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f4b564c84-4zd7z"] Feb 18 00:49:05 crc kubenswrapper[4847]: E0218 00:49:05.031257 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87700b8a-be8c-46fe-a7a6-ec022ab8c87c" containerName="neutron-httpd" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.031276 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="87700b8a-be8c-46fe-a7a6-ec022ab8c87c" containerName="neutron-httpd" Feb 18 00:49:05 crc kubenswrapper[4847]: E0218 00:49:05.031300 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff37608-c71f-48aa-9205-8aae29841abb" containerName="dnsmasq-dns" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.031309 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff37608-c71f-48aa-9205-8aae29841abb" containerName="dnsmasq-dns" Feb 18 00:49:05 crc kubenswrapper[4847]: E0218 00:49:05.031327 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015b3baa-45c2-4f4e-88d3-2aa917d3578c" containerName="barbican-api-log" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.031333 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="015b3baa-45c2-4f4e-88d3-2aa917d3578c" containerName="barbican-api-log" Feb 18 00:49:05 crc kubenswrapper[4847]: E0218 00:49:05.031345 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff37608-c71f-48aa-9205-8aae29841abb" containerName="init" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.031351 4847 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9ff37608-c71f-48aa-9205-8aae29841abb" containerName="init" Feb 18 00:49:05 crc kubenswrapper[4847]: E0218 00:49:05.031364 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87700b8a-be8c-46fe-a7a6-ec022ab8c87c" containerName="neutron-api" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.031370 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="87700b8a-be8c-46fe-a7a6-ec022ab8c87c" containerName="neutron-api" Feb 18 00:49:05 crc kubenswrapper[4847]: E0218 00:49:05.031380 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015b3baa-45c2-4f4e-88d3-2aa917d3578c" containerName="barbican-api" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.031386 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="015b3baa-45c2-4f4e-88d3-2aa917d3578c" containerName="barbican-api" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.031625 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="015b3baa-45c2-4f4e-88d3-2aa917d3578c" containerName="barbican-api-log" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.031641 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ff37608-c71f-48aa-9205-8aae29841abb" containerName="dnsmasq-dns" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.031654 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="87700b8a-be8c-46fe-a7a6-ec022ab8c87c" containerName="neutron-api" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.031670 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="87700b8a-be8c-46fe-a7a6-ec022ab8c87c" containerName="neutron-httpd" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.031681 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="015b3baa-45c2-4f4e-88d3-2aa917d3578c" containerName="barbican-api" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.032851 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.065828 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f4b564c84-4zd7z"] Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.109408 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f442fe-cea0-4d0f-a39d-75b8648fbc3d-config-data\") pod \"placement-f4b564c84-4zd7z\" (UID: \"31f442fe-cea0-4d0f-a39d-75b8648fbc3d\") " pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.109509 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31f442fe-cea0-4d0f-a39d-75b8648fbc3d-public-tls-certs\") pod \"placement-f4b564c84-4zd7z\" (UID: \"31f442fe-cea0-4d0f-a39d-75b8648fbc3d\") " pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.109622 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31f442fe-cea0-4d0f-a39d-75b8648fbc3d-internal-tls-certs\") pod \"placement-f4b564c84-4zd7z\" (UID: \"31f442fe-cea0-4d0f-a39d-75b8648fbc3d\") " pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.109706 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31f442fe-cea0-4d0f-a39d-75b8648fbc3d-scripts\") pod \"placement-f4b564c84-4zd7z\" (UID: \"31f442fe-cea0-4d0f-a39d-75b8648fbc3d\") " pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.109742 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f442fe-cea0-4d0f-a39d-75b8648fbc3d-combined-ca-bundle\") pod \"placement-f4b564c84-4zd7z\" (UID: \"31f442fe-cea0-4d0f-a39d-75b8648fbc3d\") " pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.109826 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwb8t\" (UniqueName: \"kubernetes.io/projected/31f442fe-cea0-4d0f-a39d-75b8648fbc3d-kube-api-access-xwb8t\") pod \"placement-f4b564c84-4zd7z\" (UID: \"31f442fe-cea0-4d0f-a39d-75b8648fbc3d\") " pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.109876 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31f442fe-cea0-4d0f-a39d-75b8648fbc3d-logs\") pod \"placement-f4b564c84-4zd7z\" (UID: \"31f442fe-cea0-4d0f-a39d-75b8648fbc3d\") " pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.199845 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67dc676569-x5csl" event={"ID":"87700b8a-be8c-46fe-a7a6-ec022ab8c87c","Type":"ContainerDied","Data":"5daf6102b87fb83ffc96f822875e82eaee9c45408add7713d3374d9680d391b6"} Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.199931 4847 scope.go:117] "RemoveContainer" containerID="98c3847fc17c2dcdb878f91215424724bd9eeef26f6a6c0ba24055626060a239" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.200100 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-67dc676569-x5csl" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.212481 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwb8t\" (UniqueName: \"kubernetes.io/projected/31f442fe-cea0-4d0f-a39d-75b8648fbc3d-kube-api-access-xwb8t\") pod \"placement-f4b564c84-4zd7z\" (UID: \"31f442fe-cea0-4d0f-a39d-75b8648fbc3d\") " pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.212816 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31f442fe-cea0-4d0f-a39d-75b8648fbc3d-logs\") pod \"placement-f4b564c84-4zd7z\" (UID: \"31f442fe-cea0-4d0f-a39d-75b8648fbc3d\") " pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.212852 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f442fe-cea0-4d0f-a39d-75b8648fbc3d-config-data\") pod \"placement-f4b564c84-4zd7z\" (UID: \"31f442fe-cea0-4d0f-a39d-75b8648fbc3d\") " pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.212880 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31f442fe-cea0-4d0f-a39d-75b8648fbc3d-public-tls-certs\") pod \"placement-f4b564c84-4zd7z\" (UID: \"31f442fe-cea0-4d0f-a39d-75b8648fbc3d\") " pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.212946 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31f442fe-cea0-4d0f-a39d-75b8648fbc3d-internal-tls-certs\") pod \"placement-f4b564c84-4zd7z\" (UID: \"31f442fe-cea0-4d0f-a39d-75b8648fbc3d\") " pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:05 crc 
kubenswrapper[4847]: I0218 00:49:05.213004 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31f442fe-cea0-4d0f-a39d-75b8648fbc3d-scripts\") pod \"placement-f4b564c84-4zd7z\" (UID: \"31f442fe-cea0-4d0f-a39d-75b8648fbc3d\") " pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.213023 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31f442fe-cea0-4d0f-a39d-75b8648fbc3d-combined-ca-bundle\") pod \"placement-f4b564c84-4zd7z\" (UID: \"31f442fe-cea0-4d0f-a39d-75b8648fbc3d\") " pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.213827 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31f442fe-cea0-4d0f-a39d-75b8648fbc3d-logs\") pod \"placement-f4b564c84-4zd7z\" (UID: \"31f442fe-cea0-4d0f-a39d-75b8648fbc3d\") " pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.220425 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/31f442fe-cea0-4d0f-a39d-75b8648fbc3d-internal-tls-certs\") pod \"placement-f4b564c84-4zd7z\" (UID: \"31f442fe-cea0-4d0f-a39d-75b8648fbc3d\") " pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.224644 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31f442fe-cea0-4d0f-a39d-75b8648fbc3d-config-data\") pod \"placement-f4b564c84-4zd7z\" (UID: \"31f442fe-cea0-4d0f-a39d-75b8648fbc3d\") " pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.225265 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/31f442fe-cea0-4d0f-a39d-75b8648fbc3d-combined-ca-bundle\") pod \"placement-f4b564c84-4zd7z\" (UID: \"31f442fe-cea0-4d0f-a39d-75b8648fbc3d\") " pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.244034 4847 generic.go:334] "Generic (PLEG): container finished" podID="de202952-4ed4-4cc5-8eb4-1d167600a639" containerID="69a8577d627203a73c61d97ae4418b00b25be3245036ee466f1f1ce5b6083dcb" exitCode=0 Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.244145 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jt7tq" event={"ID":"de202952-4ed4-4cc5-8eb4-1d167600a639","Type":"ContainerDied","Data":"69a8577d627203a73c61d97ae4418b00b25be3245036ee466f1f1ce5b6083dcb"} Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.244180 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jt7tq" event={"ID":"de202952-4ed4-4cc5-8eb4-1d167600a639","Type":"ContainerStarted","Data":"ab9c8b0b3aacef7f3fc59fb8d33a5b3e319b95472e0e487cd5ae670acfd911f6"} Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.246115 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/31f442fe-cea0-4d0f-a39d-75b8648fbc3d-public-tls-certs\") pod \"placement-f4b564c84-4zd7z\" (UID: \"31f442fe-cea0-4d0f-a39d-75b8648fbc3d\") " pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.246273 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31f442fe-cea0-4d0f-a39d-75b8648fbc3d-scripts\") pod \"placement-f4b564c84-4zd7z\" (UID: \"31f442fe-cea0-4d0f-a39d-75b8648fbc3d\") " pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.305104 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xwb8t\" (UniqueName: \"kubernetes.io/projected/31f442fe-cea0-4d0f-a39d-75b8648fbc3d-kube-api-access-xwb8t\") pod \"placement-f4b564c84-4zd7z\" (UID: \"31f442fe-cea0-4d0f-a39d-75b8648fbc3d\") " pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.305426 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7pbj" event={"ID":"8ec71ab6-3f40-4239-b6b7-db48ce3aaca1","Type":"ContainerStarted","Data":"2db04098ac1e8ae8295c69754f9ead7e314877a9f37419fcc918274785c6a83c"} Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.339593 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s7pbj" podStartSLOduration=3.72934233 podStartE2EDuration="7.339572262s" podCreationTimestamp="2026-02-18 00:48:58 +0000 UTC" firstStartedPulling="2026-02-18 00:49:00.890763956 +0000 UTC m=+1414.268114898" lastFinishedPulling="2026-02-18 00:49:04.500993888 +0000 UTC m=+1417.878344830" observedRunningTime="2026-02-18 00:49:05.338473765 +0000 UTC m=+1418.715824707" watchObservedRunningTime="2026-02-18 00:49:05.339572262 +0000 UTC m=+1418.716923204" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.368202 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.372129 4847 scope.go:117] "RemoveContainer" containerID="cbb7fc726aaf64f862df797728b8e837eb204434fc78233e5b929f74f594f2f4" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.486000 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="015b3baa-45c2-4f4e-88d3-2aa917d3578c" path="/var/lib/kubelet/pods/015b3baa-45c2-4f4e-88d3-2aa917d3578c/volumes" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.487387 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ff37608-c71f-48aa-9205-8aae29841abb" path="/var/lib/kubelet/pods/9ff37608-c71f-48aa-9205-8aae29841abb/volumes" Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.581716 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-67dc676569-x5csl"] Feb 18 00:49:05 crc kubenswrapper[4847]: I0218 00:49:05.591226 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-67dc676569-x5csl"] Feb 18 00:49:06 crc kubenswrapper[4847]: I0218 00:49:06.051178 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f4b564c84-4zd7z"] Feb 18 00:49:06 crc kubenswrapper[4847]: I0218 00:49:06.327370 4847 generic.go:334] "Generic (PLEG): container finished" podID="836d884c-054f-4eb6-93ef-1d6361564b01" containerID="07f34af9957a545eab1522b4d41d8f3f7498969870d2e1e94bf1b969165b1fb0" exitCode=0 Feb 18 00:49:06 crc kubenswrapper[4847]: I0218 00:49:06.327635 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"836d884c-054f-4eb6-93ef-1d6361564b01","Type":"ContainerDied","Data":"07f34af9957a545eab1522b4d41d8f3f7498969870d2e1e94bf1b969165b1fb0"} Feb 18 00:49:06 crc kubenswrapper[4847]: I0218 00:49:06.330568 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jt7tq" 
event={"ID":"de202952-4ed4-4cc5-8eb4-1d167600a639","Type":"ContainerStarted","Data":"0f58ab420489608caafa39ea69c3a0aaddee8e4825d1117e5f2cf0201544e294"} Feb 18 00:49:06 crc kubenswrapper[4847]: I0218 00:49:06.335662 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f4b564c84-4zd7z" event={"ID":"31f442fe-cea0-4d0f-a39d-75b8648fbc3d","Type":"ContainerStarted","Data":"c98a1bbf2034d724927431093a66d6f353cfa334c1f795a52dc7b3abf6a6156d"} Feb 18 00:49:06 crc kubenswrapper[4847]: I0218 00:49:06.371202 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 18 00:49:07 crc kubenswrapper[4847]: I0218 00:49:07.347355 4847 generic.go:334] "Generic (PLEG): container finished" podID="de202952-4ed4-4cc5-8eb4-1d167600a639" containerID="0f58ab420489608caafa39ea69c3a0aaddee8e4825d1117e5f2cf0201544e294" exitCode=0 Feb 18 00:49:07 crc kubenswrapper[4847]: I0218 00:49:07.347431 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jt7tq" event={"ID":"de202952-4ed4-4cc5-8eb4-1d167600a639","Type":"ContainerDied","Data":"0f58ab420489608caafa39ea69c3a0aaddee8e4825d1117e5f2cf0201544e294"} Feb 18 00:49:07 crc kubenswrapper[4847]: I0218 00:49:07.350543 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f4b564c84-4zd7z" event={"ID":"31f442fe-cea0-4d0f-a39d-75b8648fbc3d","Type":"ContainerStarted","Data":"d5934f92f3f4cf1e70fca42c4d4a2003a53528ddb568cb0a696ee87eb210fc2c"} Feb 18 00:49:07 crc kubenswrapper[4847]: I0218 00:49:07.350585 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f4b564c84-4zd7z" event={"ID":"31f442fe-cea0-4d0f-a39d-75b8648fbc3d","Type":"ContainerStarted","Data":"ce58e600ac60948ed684bf11b51a679199e7db3c0c196fc99416987d29678549"} Feb 18 00:49:07 crc kubenswrapper[4847]: I0218 00:49:07.350803 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:07 crc kubenswrapper[4847]: I0218 00:49:07.350856 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:07 crc kubenswrapper[4847]: I0218 00:49:07.398242 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-f4b564c84-4zd7z" podStartSLOduration=3.398225807 podStartE2EDuration="3.398225807s" podCreationTimestamp="2026-02-18 00:49:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:49:07.392842003 +0000 UTC m=+1420.770192945" watchObservedRunningTime="2026-02-18 00:49:07.398225807 +0000 UTC m=+1420.775576749" Feb 18 00:49:07 crc kubenswrapper[4847]: I0218 00:49:07.418807 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87700b8a-be8c-46fe-a7a6-ec022ab8c87c" path="/var/lib/kubelet/pods/87700b8a-be8c-46fe-a7a6-ec022ab8c87c/volumes" Feb 18 00:49:08 crc kubenswrapper[4847]: I0218 00:49:08.374397 4847 generic.go:334] "Generic (PLEG): container finished" podID="836d884c-054f-4eb6-93ef-1d6361564b01" containerID="47953f526fed4828233ef4a350bf871c0962232abcb2f7f3ece7325b5e00b206" exitCode=0 Feb 18 00:49:08 crc kubenswrapper[4847]: I0218 00:49:08.374570 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"836d884c-054f-4eb6-93ef-1d6361564b01","Type":"ContainerDied","Data":"47953f526fed4828233ef4a350bf871c0962232abcb2f7f3ece7325b5e00b206"} Feb 18 00:49:08 crc kubenswrapper[4847]: I0218 00:49:08.506586 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 00:49:08 crc kubenswrapper[4847]: I0218 00:49:08.528594 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-config-data\") pod \"836d884c-054f-4eb6-93ef-1d6361564b01\" (UID: \"836d884c-054f-4eb6-93ef-1d6361564b01\") " Feb 18 00:49:08 crc kubenswrapper[4847]: I0218 00:49:08.528651 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-scripts\") pod \"836d884c-054f-4eb6-93ef-1d6361564b01\" (UID: \"836d884c-054f-4eb6-93ef-1d6361564b01\") " Feb 18 00:49:08 crc kubenswrapper[4847]: I0218 00:49:08.528687 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-config-data-custom\") pod \"836d884c-054f-4eb6-93ef-1d6361564b01\" (UID: \"836d884c-054f-4eb6-93ef-1d6361564b01\") " Feb 18 00:49:08 crc kubenswrapper[4847]: I0218 00:49:08.528781 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86tbz\" (UniqueName: \"kubernetes.io/projected/836d884c-054f-4eb6-93ef-1d6361564b01-kube-api-access-86tbz\") pod \"836d884c-054f-4eb6-93ef-1d6361564b01\" (UID: \"836d884c-054f-4eb6-93ef-1d6361564b01\") " Feb 18 00:49:08 crc kubenswrapper[4847]: I0218 00:49:08.528800 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/836d884c-054f-4eb6-93ef-1d6361564b01-etc-machine-id\") pod \"836d884c-054f-4eb6-93ef-1d6361564b01\" (UID: \"836d884c-054f-4eb6-93ef-1d6361564b01\") " Feb 18 00:49:08 crc kubenswrapper[4847]: I0218 00:49:08.528821 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-combined-ca-bundle\") pod \"836d884c-054f-4eb6-93ef-1d6361564b01\" (UID: \"836d884c-054f-4eb6-93ef-1d6361564b01\") " Feb 18 00:49:08 crc kubenswrapper[4847]: I0218 00:49:08.531867 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/836d884c-054f-4eb6-93ef-1d6361564b01-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "836d884c-054f-4eb6-93ef-1d6361564b01" (UID: "836d884c-054f-4eb6-93ef-1d6361564b01"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:49:08 crc kubenswrapper[4847]: I0218 00:49:08.537988 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "836d884c-054f-4eb6-93ef-1d6361564b01" (UID: "836d884c-054f-4eb6-93ef-1d6361564b01"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:08 crc kubenswrapper[4847]: I0218 00:49:08.566121 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836d884c-054f-4eb6-93ef-1d6361564b01-kube-api-access-86tbz" (OuterVolumeSpecName: "kube-api-access-86tbz") pod "836d884c-054f-4eb6-93ef-1d6361564b01" (UID: "836d884c-054f-4eb6-93ef-1d6361564b01"). InnerVolumeSpecName "kube-api-access-86tbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:49:08 crc kubenswrapper[4847]: I0218 00:49:08.566862 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-scripts" (OuterVolumeSpecName: "scripts") pod "836d884c-054f-4eb6-93ef-1d6361564b01" (UID: "836d884c-054f-4eb6-93ef-1d6361564b01"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:08 crc kubenswrapper[4847]: I0218 00:49:08.635441 4847 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:08 crc kubenswrapper[4847]: I0218 00:49:08.635484 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86tbz\" (UniqueName: \"kubernetes.io/projected/836d884c-054f-4eb6-93ef-1d6361564b01-kube-api-access-86tbz\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:08 crc kubenswrapper[4847]: I0218 00:49:08.635494 4847 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/836d884c-054f-4eb6-93ef-1d6361564b01-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:08 crc kubenswrapper[4847]: I0218 00:49:08.635506 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:08 crc kubenswrapper[4847]: I0218 00:49:08.676273 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "836d884c-054f-4eb6-93ef-1d6361564b01" (UID: "836d884c-054f-4eb6-93ef-1d6361564b01"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:08 crc kubenswrapper[4847]: I0218 00:49:08.740320 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:08 crc kubenswrapper[4847]: I0218 00:49:08.755857 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-config-data" (OuterVolumeSpecName: "config-data") pod "836d884c-054f-4eb6-93ef-1d6361564b01" (UID: "836d884c-054f-4eb6-93ef-1d6361564b01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:08 crc kubenswrapper[4847]: I0218 00:49:08.799171 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-747f4858ff-m9tz2" Feb 18 00:49:08 crc kubenswrapper[4847]: I0218 00:49:08.844188 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/836d884c-054f-4eb6-93ef-1d6361564b01-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.079433 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s7pbj" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.079493 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s7pbj" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.385344 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jt7tq" event={"ID":"de202952-4ed4-4cc5-8eb4-1d167600a639","Type":"ContainerStarted","Data":"1b78bd10a12af759abac3c8817dd81449f68b073c349c83e4f3da78be501e958"} Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.387899 4847 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/cinder-scheduler-0" event={"ID":"836d884c-054f-4eb6-93ef-1d6361564b01","Type":"ContainerDied","Data":"089e64e8108cd3d6270b30ecc5ddd6aaed15434361b6f4c937fe10fff8e2fbb8"} Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.387939 4847 scope.go:117] "RemoveContainer" containerID="07f34af9957a545eab1522b4d41d8f3f7498969870d2e1e94bf1b969165b1fb0" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.387954 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.412314 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jt7tq" podStartSLOduration=3.600398723 podStartE2EDuration="6.412300738s" podCreationTimestamp="2026-02-18 00:49:03 +0000 UTC" firstStartedPulling="2026-02-18 00:49:05.256013081 +0000 UTC m=+1418.633364023" lastFinishedPulling="2026-02-18 00:49:08.067915096 +0000 UTC m=+1421.445266038" observedRunningTime="2026-02-18 00:49:09.409871457 +0000 UTC m=+1422.787222399" watchObservedRunningTime="2026-02-18 00:49:09.412300738 +0000 UTC m=+1422.789651680" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.421928 4847 scope.go:117] "RemoveContainer" containerID="47953f526fed4828233ef4a350bf871c0962232abcb2f7f3ece7325b5e00b206" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.472738 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.479367 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.489585 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 00:49:09 crc kubenswrapper[4847]: E0218 00:49:09.490063 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836d884c-054f-4eb6-93ef-1d6361564b01" 
containerName="cinder-scheduler" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.490074 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="836d884c-054f-4eb6-93ef-1d6361564b01" containerName="cinder-scheduler" Feb 18 00:49:09 crc kubenswrapper[4847]: E0218 00:49:09.490087 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836d884c-054f-4eb6-93ef-1d6361564b01" containerName="probe" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.490095 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="836d884c-054f-4eb6-93ef-1d6361564b01" containerName="probe" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.490296 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="836d884c-054f-4eb6-93ef-1d6361564b01" containerName="cinder-scheduler" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.490310 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="836d884c-054f-4eb6-93ef-1d6361564b01" containerName="probe" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.491387 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.495951 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.517068 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.662383 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4837a634-0109-4735-80ad-a9cf74966812-scripts\") pod \"cinder-scheduler-0\" (UID: \"4837a634-0109-4735-80ad-a9cf74966812\") " pod="openstack/cinder-scheduler-0" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.662460 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4837a634-0109-4735-80ad-a9cf74966812-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4837a634-0109-4735-80ad-a9cf74966812\") " pod="openstack/cinder-scheduler-0" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.662550 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4837a634-0109-4735-80ad-a9cf74966812-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4837a634-0109-4735-80ad-a9cf74966812\") " pod="openstack/cinder-scheduler-0" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.662585 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4837a634-0109-4735-80ad-a9cf74966812-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4837a634-0109-4735-80ad-a9cf74966812\") " pod="openstack/cinder-scheduler-0" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.662633 4847 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld8rw\" (UniqueName: \"kubernetes.io/projected/4837a634-0109-4735-80ad-a9cf74966812-kube-api-access-ld8rw\") pod \"cinder-scheduler-0\" (UID: \"4837a634-0109-4735-80ad-a9cf74966812\") " pod="openstack/cinder-scheduler-0" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.662681 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4837a634-0109-4735-80ad-a9cf74966812-config-data\") pod \"cinder-scheduler-0\" (UID: \"4837a634-0109-4735-80ad-a9cf74966812\") " pod="openstack/cinder-scheduler-0" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.764214 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4837a634-0109-4735-80ad-a9cf74966812-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4837a634-0109-4735-80ad-a9cf74966812\") " pod="openstack/cinder-scheduler-0" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.764621 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4837a634-0109-4735-80ad-a9cf74966812-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4837a634-0109-4735-80ad-a9cf74966812\") " pod="openstack/cinder-scheduler-0" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.764757 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4837a634-0109-4735-80ad-a9cf74966812-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4837a634-0109-4735-80ad-a9cf74966812\") " pod="openstack/cinder-scheduler-0" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.764879 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/4837a634-0109-4735-80ad-a9cf74966812-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4837a634-0109-4735-80ad-a9cf74966812\") " pod="openstack/cinder-scheduler-0" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.764898 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld8rw\" (UniqueName: \"kubernetes.io/projected/4837a634-0109-4735-80ad-a9cf74966812-kube-api-access-ld8rw\") pod \"cinder-scheduler-0\" (UID: \"4837a634-0109-4735-80ad-a9cf74966812\") " pod="openstack/cinder-scheduler-0" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.765017 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4837a634-0109-4735-80ad-a9cf74966812-config-data\") pod \"cinder-scheduler-0\" (UID: \"4837a634-0109-4735-80ad-a9cf74966812\") " pod="openstack/cinder-scheduler-0" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.765112 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4837a634-0109-4735-80ad-a9cf74966812-scripts\") pod \"cinder-scheduler-0\" (UID: \"4837a634-0109-4735-80ad-a9cf74966812\") " pod="openstack/cinder-scheduler-0" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.769709 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4837a634-0109-4735-80ad-a9cf74966812-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4837a634-0109-4735-80ad-a9cf74966812\") " pod="openstack/cinder-scheduler-0" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.769983 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4837a634-0109-4735-80ad-a9cf74966812-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4837a634-0109-4735-80ad-a9cf74966812\") " 
pod="openstack/cinder-scheduler-0" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.770253 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4837a634-0109-4735-80ad-a9cf74966812-config-data\") pod \"cinder-scheduler-0\" (UID: \"4837a634-0109-4735-80ad-a9cf74966812\") " pod="openstack/cinder-scheduler-0" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.780951 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4837a634-0109-4735-80ad-a9cf74966812-scripts\") pod \"cinder-scheduler-0\" (UID: \"4837a634-0109-4735-80ad-a9cf74966812\") " pod="openstack/cinder-scheduler-0" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.785412 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld8rw\" (UniqueName: \"kubernetes.io/projected/4837a634-0109-4735-80ad-a9cf74966812-kube-api-access-ld8rw\") pod \"cinder-scheduler-0\" (UID: \"4837a634-0109-4735-80ad-a9cf74966812\") " pod="openstack/cinder-scheduler-0" Feb 18 00:49:09 crc kubenswrapper[4847]: I0218 00:49:09.844527 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 18 00:49:10 crc kubenswrapper[4847]: I0218 00:49:10.139357 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s7pbj" podUID="8ec71ab6-3f40-4239-b6b7-db48ce3aaca1" containerName="registry-server" probeResult="failure" output=< Feb 18 00:49:10 crc kubenswrapper[4847]: timeout: failed to connect service ":50051" within 1s Feb 18 00:49:10 crc kubenswrapper[4847]: > Feb 18 00:49:10 crc kubenswrapper[4847]: I0218 00:49:10.392693 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 18 00:49:10 crc kubenswrapper[4847]: I0218 00:49:10.760367 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 18 00:49:10 crc kubenswrapper[4847]: I0218 00:49:10.771750 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 18 00:49:10 crc kubenswrapper[4847]: I0218 00:49:10.774615 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 18 00:49:10 crc kubenswrapper[4847]: I0218 00:49:10.774810 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 18 00:49:10 crc kubenswrapper[4847]: I0218 00:49:10.775062 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-lbkll" Feb 18 00:49:10 crc kubenswrapper[4847]: I0218 00:49:10.784095 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 00:49:10 crc kubenswrapper[4847]: I0218 00:49:10.895728 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f55a480-6f28-47f9-aa62-f21de18ff60e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4f55a480-6f28-47f9-aa62-f21de18ff60e\") " 
pod="openstack/openstackclient" Feb 18 00:49:10 crc kubenswrapper[4847]: I0218 00:49:10.895781 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4f55a480-6f28-47f9-aa62-f21de18ff60e-openstack-config-secret\") pod \"openstackclient\" (UID: \"4f55a480-6f28-47f9-aa62-f21de18ff60e\") " pod="openstack/openstackclient" Feb 18 00:49:10 crc kubenswrapper[4847]: I0218 00:49:10.895849 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4f55a480-6f28-47f9-aa62-f21de18ff60e-openstack-config\") pod \"openstackclient\" (UID: \"4f55a480-6f28-47f9-aa62-f21de18ff60e\") " pod="openstack/openstackclient" Feb 18 00:49:10 crc kubenswrapper[4847]: I0218 00:49:10.895919 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvktc\" (UniqueName: \"kubernetes.io/projected/4f55a480-6f28-47f9-aa62-f21de18ff60e-kube-api-access-hvktc\") pod \"openstackclient\" (UID: \"4f55a480-6f28-47f9-aa62-f21de18ff60e\") " pod="openstack/openstackclient" Feb 18 00:49:10 crc kubenswrapper[4847]: I0218 00:49:10.998536 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4f55a480-6f28-47f9-aa62-f21de18ff60e-openstack-config-secret\") pod \"openstackclient\" (UID: \"4f55a480-6f28-47f9-aa62-f21de18ff60e\") " pod="openstack/openstackclient" Feb 18 00:49:10 crc kubenswrapper[4847]: I0218 00:49:10.998958 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4f55a480-6f28-47f9-aa62-f21de18ff60e-openstack-config\") pod \"openstackclient\" (UID: \"4f55a480-6f28-47f9-aa62-f21de18ff60e\") " pod="openstack/openstackclient" Feb 18 00:49:10 crc kubenswrapper[4847]: 
I0218 00:49:10.999042 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvktc\" (UniqueName: \"kubernetes.io/projected/4f55a480-6f28-47f9-aa62-f21de18ff60e-kube-api-access-hvktc\") pod \"openstackclient\" (UID: \"4f55a480-6f28-47f9-aa62-f21de18ff60e\") " pod="openstack/openstackclient" Feb 18 00:49:10 crc kubenswrapper[4847]: I0218 00:49:10.999130 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f55a480-6f28-47f9-aa62-f21de18ff60e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4f55a480-6f28-47f9-aa62-f21de18ff60e\") " pod="openstack/openstackclient" Feb 18 00:49:11 crc kubenswrapper[4847]: I0218 00:49:11.000504 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4f55a480-6f28-47f9-aa62-f21de18ff60e-openstack-config\") pod \"openstackclient\" (UID: \"4f55a480-6f28-47f9-aa62-f21de18ff60e\") " pod="openstack/openstackclient" Feb 18 00:49:11 crc kubenswrapper[4847]: I0218 00:49:11.011803 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4f55a480-6f28-47f9-aa62-f21de18ff60e-openstack-config-secret\") pod \"openstackclient\" (UID: \"4f55a480-6f28-47f9-aa62-f21de18ff60e\") " pod="openstack/openstackclient" Feb 18 00:49:11 crc kubenswrapper[4847]: I0218 00:49:11.012529 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f55a480-6f28-47f9-aa62-f21de18ff60e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4f55a480-6f28-47f9-aa62-f21de18ff60e\") " pod="openstack/openstackclient" Feb 18 00:49:11 crc kubenswrapper[4847]: I0218 00:49:11.043843 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvktc\" (UniqueName: 
\"kubernetes.io/projected/4f55a480-6f28-47f9-aa62-f21de18ff60e-kube-api-access-hvktc\") pod \"openstackclient\" (UID: \"4f55a480-6f28-47f9-aa62-f21de18ff60e\") " pod="openstack/openstackclient" Feb 18 00:49:11 crc kubenswrapper[4847]: I0218 00:49:11.094153 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 18 00:49:11 crc kubenswrapper[4847]: I0218 00:49:11.425519 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="836d884c-054f-4eb6-93ef-1d6361564b01" path="/var/lib/kubelet/pods/836d884c-054f-4eb6-93ef-1d6361564b01/volumes" Feb 18 00:49:11 crc kubenswrapper[4847]: I0218 00:49:11.440753 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4837a634-0109-4735-80ad-a9cf74966812","Type":"ContainerStarted","Data":"fbd29a897e563044a5c632972a52cfd6c1e75c785c4cd3a7d1f63abc288332fb"} Feb 18 00:49:11 crc kubenswrapper[4847]: I0218 00:49:11.440804 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4837a634-0109-4735-80ad-a9cf74966812","Type":"ContainerStarted","Data":"5202285d9e72f52e9a9f864be6ec7835be6b22cff5ac2ade07c57fc134ca1af6"} Feb 18 00:49:11 crc kubenswrapper[4847]: I0218 00:49:11.631239 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 18 00:49:11 crc kubenswrapper[4847]: W0218 00:49:11.662826 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f55a480_6f28_47f9_aa62_f21de18ff60e.slice/crio-8e4ebab50a2ad38042a8f03eb370947bd7fc1a5fe513508d38129279789d54b3 WatchSource:0}: Error finding container 8e4ebab50a2ad38042a8f03eb370947bd7fc1a5fe513508d38129279789d54b3: Status 404 returned error can't find the container with id 8e4ebab50a2ad38042a8f03eb370947bd7fc1a5fe513508d38129279789d54b3 Feb 18 00:49:12 crc kubenswrapper[4847]: I0218 00:49:12.458890 4847 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4837a634-0109-4735-80ad-a9cf74966812","Type":"ContainerStarted","Data":"6679552321c6e1c2c72697e18d74f1293a82b3696194c08f7a93a626a5d0c03f"} Feb 18 00:49:12 crc kubenswrapper[4847]: I0218 00:49:12.462641 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4f55a480-6f28-47f9-aa62-f21de18ff60e","Type":"ContainerStarted","Data":"8e4ebab50a2ad38042a8f03eb370947bd7fc1a5fe513508d38129279789d54b3"} Feb 18 00:49:12 crc kubenswrapper[4847]: I0218 00:49:12.489360 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.489338897 podStartE2EDuration="3.489338897s" podCreationTimestamp="2026-02-18 00:49:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:49:12.476942147 +0000 UTC m=+1425.854293099" watchObservedRunningTime="2026-02-18 00:49:12.489338897 +0000 UTC m=+1425.866689839" Feb 18 00:49:13 crc kubenswrapper[4847]: I0218 00:49:13.636769 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jt7tq" Feb 18 00:49:13 crc kubenswrapper[4847]: I0218 00:49:13.636807 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jt7tq" Feb 18 00:49:14 crc kubenswrapper[4847]: I0218 00:49:14.693446 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jt7tq" podUID="de202952-4ed4-4cc5-8eb4-1d167600a639" containerName="registry-server" probeResult="failure" output=< Feb 18 00:49:14 crc kubenswrapper[4847]: timeout: failed to connect service ":50051" within 1s Feb 18 00:49:14 crc kubenswrapper[4847]: > Feb 18 00:49:14 crc kubenswrapper[4847]: I0218 00:49:14.844771 4847 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.351420 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5b8c9bd889-lvxrd"] Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.352881 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5b8c9bd889-lvxrd" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.355387 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-dmgzf" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.356184 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.357479 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.466496 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5b8c9bd889-lvxrd"] Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.514001 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-79cfb99699-ctzx2"] Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.515324 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-79cfb99699-ctzx2" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.518810 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.525089 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-sktjc"] Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.527493 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.544183 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtwxv\" (UniqueName: \"kubernetes.io/projected/16f1c8da-07de-457e-a7f4-a16db587196b-kube-api-access-qtwxv\") pod \"heat-cfnapi-79cfb99699-ctzx2\" (UID: \"16f1c8da-07de-457e-a7f4-a16db587196b\") " pod="openstack/heat-cfnapi-79cfb99699-ctzx2" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.544237 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-config-data\") pod \"heat-engine-5b8c9bd889-lvxrd\" (UID: \"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e\") " pod="openstack/heat-engine-5b8c9bd889-lvxrd" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.544278 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-sktjc\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.544317 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-combined-ca-bundle\") pod \"heat-engine-5b8c9bd889-lvxrd\" (UID: \"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e\") " pod="openstack/heat-engine-5b8c9bd889-lvxrd" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.544341 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-config\") pod 
\"dnsmasq-dns-688b9f5b49-sktjc\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.544377 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16f1c8da-07de-457e-a7f4-a16db587196b-config-data-custom\") pod \"heat-cfnapi-79cfb99699-ctzx2\" (UID: \"16f1c8da-07de-457e-a7f4-a16db587196b\") " pod="openstack/heat-cfnapi-79cfb99699-ctzx2" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.544540 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-sktjc\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.544884 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxcb4\" (UniqueName: \"kubernetes.io/projected/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-kube-api-access-nxcb4\") pod \"heat-engine-5b8c9bd889-lvxrd\" (UID: \"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e\") " pod="openstack/heat-engine-5b8c9bd889-lvxrd" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.544929 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-sktjc\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.544957 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-config-data-custom\") pod \"heat-engine-5b8c9bd889-lvxrd\" (UID: \"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e\") " pod="openstack/heat-engine-5b8c9bd889-lvxrd" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.545040 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16f1c8da-07de-457e-a7f4-a16db587196b-config-data\") pod \"heat-cfnapi-79cfb99699-ctzx2\" (UID: \"16f1c8da-07de-457e-a7f4-a16db587196b\") " pod="openstack/heat-cfnapi-79cfb99699-ctzx2" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.545065 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-sktjc\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.545085 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f1c8da-07de-457e-a7f4-a16db587196b-combined-ca-bundle\") pod \"heat-cfnapi-79cfb99699-ctzx2\" (UID: \"16f1c8da-07de-457e-a7f4-a16db587196b\") " pod="openstack/heat-cfnapi-79cfb99699-ctzx2" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.545098 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqx47\" (UniqueName: \"kubernetes.io/projected/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-kube-api-access-gqx47\") pod \"dnsmasq-dns-688b9f5b49-sktjc\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.547543 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-cfnapi-79cfb99699-ctzx2"] Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.559086 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-sktjc"] Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.604768 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-59794cdfcf-5hdcv"] Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.606128 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-59794cdfcf-5hdcv" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.610235 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.620038 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-59794cdfcf-5hdcv"] Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.647948 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7250520c-bcaf-4564-9155-8ecada7c6880-config-data\") pod \"heat-api-59794cdfcf-5hdcv\" (UID: \"7250520c-bcaf-4564-9155-8ecada7c6880\") " pod="openstack/heat-api-59794cdfcf-5hdcv" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.648124 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-sktjc\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.648176 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxcb4\" (UniqueName: \"kubernetes.io/projected/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-kube-api-access-nxcb4\") pod \"heat-engine-5b8c9bd889-lvxrd\" (UID: 
\"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e\") " pod="openstack/heat-engine-5b8c9bd889-lvxrd" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.648203 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-sktjc\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.648244 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-config-data-custom\") pod \"heat-engine-5b8c9bd889-lvxrd\" (UID: \"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e\") " pod="openstack/heat-engine-5b8c9bd889-lvxrd" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.648676 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7250520c-bcaf-4564-9155-8ecada7c6880-combined-ca-bundle\") pod \"heat-api-59794cdfcf-5hdcv\" (UID: \"7250520c-bcaf-4564-9155-8ecada7c6880\") " pod="openstack/heat-api-59794cdfcf-5hdcv" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.648730 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tsb6\" (UniqueName: \"kubernetes.io/projected/7250520c-bcaf-4564-9155-8ecada7c6880-kube-api-access-5tsb6\") pod \"heat-api-59794cdfcf-5hdcv\" (UID: \"7250520c-bcaf-4564-9155-8ecada7c6880\") " pod="openstack/heat-api-59794cdfcf-5hdcv" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.648815 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16f1c8da-07de-457e-a7f4-a16db587196b-config-data\") pod \"heat-cfnapi-79cfb99699-ctzx2\" (UID: 
\"16f1c8da-07de-457e-a7f4-a16db587196b\") " pod="openstack/heat-cfnapi-79cfb99699-ctzx2" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.648844 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-sktjc\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.648873 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f1c8da-07de-457e-a7f4-a16db587196b-combined-ca-bundle\") pod \"heat-cfnapi-79cfb99699-ctzx2\" (UID: \"16f1c8da-07de-457e-a7f4-a16db587196b\") " pod="openstack/heat-cfnapi-79cfb99699-ctzx2" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.648899 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqx47\" (UniqueName: \"kubernetes.io/projected/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-kube-api-access-gqx47\") pod \"dnsmasq-dns-688b9f5b49-sktjc\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.648978 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7250520c-bcaf-4564-9155-8ecada7c6880-config-data-custom\") pod \"heat-api-59794cdfcf-5hdcv\" (UID: \"7250520c-bcaf-4564-9155-8ecada7c6880\") " pod="openstack/heat-api-59794cdfcf-5hdcv" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.649037 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtwxv\" (UniqueName: \"kubernetes.io/projected/16f1c8da-07de-457e-a7f4-a16db587196b-kube-api-access-qtwxv\") pod \"heat-cfnapi-79cfb99699-ctzx2\" 
(UID: \"16f1c8da-07de-457e-a7f4-a16db587196b\") " pod="openstack/heat-cfnapi-79cfb99699-ctzx2" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.649084 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-config-data\") pod \"heat-engine-5b8c9bd889-lvxrd\" (UID: \"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e\") " pod="openstack/heat-engine-5b8c9bd889-lvxrd" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.649127 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-sktjc\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.649191 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-combined-ca-bundle\") pod \"heat-engine-5b8c9bd889-lvxrd\" (UID: \"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e\") " pod="openstack/heat-engine-5b8c9bd889-lvxrd" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.649223 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-config\") pod \"dnsmasq-dns-688b9f5b49-sktjc\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.649281 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16f1c8da-07de-457e-a7f4-a16db587196b-config-data-custom\") pod \"heat-cfnapi-79cfb99699-ctzx2\" (UID: \"16f1c8da-07de-457e-a7f4-a16db587196b\") " 
pod="openstack/heat-cfnapi-79cfb99699-ctzx2" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.649324 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-sktjc\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.649333 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-sktjc\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.649977 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-config\") pod \"dnsmasq-dns-688b9f5b49-sktjc\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.650310 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-sktjc\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.651155 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-sktjc\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.658035 4847 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-config-data\") pod \"heat-engine-5b8c9bd889-lvxrd\" (UID: \"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e\") " pod="openstack/heat-engine-5b8c9bd889-lvxrd" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.658728 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-config-data-custom\") pod \"heat-engine-5b8c9bd889-lvxrd\" (UID: \"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e\") " pod="openstack/heat-engine-5b8c9bd889-lvxrd" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.664448 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f1c8da-07de-457e-a7f4-a16db587196b-combined-ca-bundle\") pod \"heat-cfnapi-79cfb99699-ctzx2\" (UID: \"16f1c8da-07de-457e-a7f4-a16db587196b\") " pod="openstack/heat-cfnapi-79cfb99699-ctzx2" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.665621 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16f1c8da-07de-457e-a7f4-a16db587196b-config-data\") pod \"heat-cfnapi-79cfb99699-ctzx2\" (UID: \"16f1c8da-07de-457e-a7f4-a16db587196b\") " pod="openstack/heat-cfnapi-79cfb99699-ctzx2" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.668006 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-combined-ca-bundle\") pod \"heat-engine-5b8c9bd889-lvxrd\" (UID: \"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e\") " pod="openstack/heat-engine-5b8c9bd889-lvxrd" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.669897 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtwxv\" 
(UniqueName: \"kubernetes.io/projected/16f1c8da-07de-457e-a7f4-a16db587196b-kube-api-access-qtwxv\") pod \"heat-cfnapi-79cfb99699-ctzx2\" (UID: \"16f1c8da-07de-457e-a7f4-a16db587196b\") " pod="openstack/heat-cfnapi-79cfb99699-ctzx2" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.670662 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16f1c8da-07de-457e-a7f4-a16db587196b-config-data-custom\") pod \"heat-cfnapi-79cfb99699-ctzx2\" (UID: \"16f1c8da-07de-457e-a7f4-a16db587196b\") " pod="openstack/heat-cfnapi-79cfb99699-ctzx2" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.675619 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqx47\" (UniqueName: \"kubernetes.io/projected/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-kube-api-access-gqx47\") pod \"dnsmasq-dns-688b9f5b49-sktjc\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.689682 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxcb4\" (UniqueName: \"kubernetes.io/projected/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-kube-api-access-nxcb4\") pod \"heat-engine-5b8c9bd889-lvxrd\" (UID: \"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e\") " pod="openstack/heat-engine-5b8c9bd889-lvxrd" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.757346 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7250520c-bcaf-4564-9155-8ecada7c6880-config-data\") pod \"heat-api-59794cdfcf-5hdcv\" (UID: \"7250520c-bcaf-4564-9155-8ecada7c6880\") " pod="openstack/heat-api-59794cdfcf-5hdcv" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.757424 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7250520c-bcaf-4564-9155-8ecada7c6880-combined-ca-bundle\") pod \"heat-api-59794cdfcf-5hdcv\" (UID: \"7250520c-bcaf-4564-9155-8ecada7c6880\") " pod="openstack/heat-api-59794cdfcf-5hdcv" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.757448 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tsb6\" (UniqueName: \"kubernetes.io/projected/7250520c-bcaf-4564-9155-8ecada7c6880-kube-api-access-5tsb6\") pod \"heat-api-59794cdfcf-5hdcv\" (UID: \"7250520c-bcaf-4564-9155-8ecada7c6880\") " pod="openstack/heat-api-59794cdfcf-5hdcv" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.757501 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7250520c-bcaf-4564-9155-8ecada7c6880-config-data-custom\") pod \"heat-api-59794cdfcf-5hdcv\" (UID: \"7250520c-bcaf-4564-9155-8ecada7c6880\") " pod="openstack/heat-api-59794cdfcf-5hdcv" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.765417 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7250520c-bcaf-4564-9155-8ecada7c6880-config-data-custom\") pod \"heat-api-59794cdfcf-5hdcv\" (UID: \"7250520c-bcaf-4564-9155-8ecada7c6880\") " pod="openstack/heat-api-59794cdfcf-5hdcv" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.782224 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7250520c-bcaf-4564-9155-8ecada7c6880-config-data\") pod \"heat-api-59794cdfcf-5hdcv\" (UID: \"7250520c-bcaf-4564-9155-8ecada7c6880\") " pod="openstack/heat-api-59794cdfcf-5hdcv" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.782820 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7250520c-bcaf-4564-9155-8ecada7c6880-combined-ca-bundle\") pod 
\"heat-api-59794cdfcf-5hdcv\" (UID: \"7250520c-bcaf-4564-9155-8ecada7c6880\") " pod="openstack/heat-api-59794cdfcf-5hdcv" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.787344 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tsb6\" (UniqueName: \"kubernetes.io/projected/7250520c-bcaf-4564-9155-8ecada7c6880-kube-api-access-5tsb6\") pod \"heat-api-59794cdfcf-5hdcv\" (UID: \"7250520c-bcaf-4564-9155-8ecada7c6880\") " pod="openstack/heat-api-59794cdfcf-5hdcv" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.848462 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-79cfb99699-ctzx2" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.863501 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.927561 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-59794cdfcf-5hdcv" Feb 18 00:49:15 crc kubenswrapper[4847]: I0218 00:49:15.975883 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5b8c9bd889-lvxrd" Feb 18 00:49:16 crc kubenswrapper[4847]: I0218 00:49:16.591417 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-59794cdfcf-5hdcv"] Feb 18 00:49:16 crc kubenswrapper[4847]: W0218 00:49:16.603447 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7250520c_bcaf_4564_9155_8ecada7c6880.slice/crio-46c5f4faa67bb4bbb22bf81156bc1eaca3b40221230142cbfa6aaf2ee605428a WatchSource:0}: Error finding container 46c5f4faa67bb4bbb22bf81156bc1eaca3b40221230142cbfa6aaf2ee605428a: Status 404 returned error can't find the container with id 46c5f4faa67bb4bbb22bf81156bc1eaca3b40221230142cbfa6aaf2ee605428a Feb 18 00:49:16 crc kubenswrapper[4847]: I0218 00:49:16.608078 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-79cfb99699-ctzx2"] Feb 18 00:49:16 crc kubenswrapper[4847]: I0218 00:49:16.722543 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-sktjc"] Feb 18 00:49:16 crc kubenswrapper[4847]: W0218 00:49:16.738752 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd07eaadd_7b7d_4ccc_9ef3_ad7bdd2524b8.slice/crio-6cbc4bba92ec5be154bdc8d2f04f7b24b30c3745230592a998b6409a4e486610 WatchSource:0}: Error finding container 6cbc4bba92ec5be154bdc8d2f04f7b24b30c3745230592a998b6409a4e486610: Status 404 returned error can't find the container with id 6cbc4bba92ec5be154bdc8d2f04f7b24b30c3745230592a998b6409a4e486610 Feb 18 00:49:16 crc kubenswrapper[4847]: I0218 00:49:16.877370 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5b8c9bd889-lvxrd"] Feb 18 00:49:16 crc kubenswrapper[4847]: W0218 00:49:16.887818 4847 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e389a17_3e84_4ed5_b2f3_59b4c42f9a8e.slice/crio-5d117fc3b4d4dcfd9f7bc9678cb84b7b569918b81251026b539701341f651709 WatchSource:0}: Error finding container 5d117fc3b4d4dcfd9f7bc9678cb84b7b569918b81251026b539701341f651709: Status 404 returned error can't find the container with id 5d117fc3b4d4dcfd9f7bc9678cb84b7b569918b81251026b539701341f651709 Feb 18 00:49:17 crc kubenswrapper[4847]: I0218 00:49:17.537254 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b8c9bd889-lvxrd" event={"ID":"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e","Type":"ContainerStarted","Data":"eca06a4157ff1cf372238e5b47a78413428dfb85975e0ea3aae4183ffbfd24ce"} Feb 18 00:49:17 crc kubenswrapper[4847]: I0218 00:49:17.537584 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b8c9bd889-lvxrd" event={"ID":"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e","Type":"ContainerStarted","Data":"5d117fc3b4d4dcfd9f7bc9678cb84b7b569918b81251026b539701341f651709"} Feb 18 00:49:17 crc kubenswrapper[4847]: I0218 00:49:17.537725 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5b8c9bd889-lvxrd" Feb 18 00:49:17 crc kubenswrapper[4847]: I0218 00:49:17.539522 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-79cfb99699-ctzx2" event={"ID":"16f1c8da-07de-457e-a7f4-a16db587196b","Type":"ContainerStarted","Data":"47927807051acc03b76065ba0ea8e030a1e643a8b77b3e91b62a5be3d82281f5"} Feb 18 00:49:17 crc kubenswrapper[4847]: I0218 00:49:17.543294 4847 generic.go:334] "Generic (PLEG): container finished" podID="d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8" containerID="dc0230118c6b93a18a14c9cdeb9bb9b77f11d1084f4281c68b6ddfdb22b82bca" exitCode=0 Feb 18 00:49:17 crc kubenswrapper[4847]: I0218 00:49:17.543355 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" 
event={"ID":"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8","Type":"ContainerDied","Data":"dc0230118c6b93a18a14c9cdeb9bb9b77f11d1084f4281c68b6ddfdb22b82bca"} Feb 18 00:49:17 crc kubenswrapper[4847]: I0218 00:49:17.543374 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" event={"ID":"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8","Type":"ContainerStarted","Data":"6cbc4bba92ec5be154bdc8d2f04f7b24b30c3745230592a998b6409a4e486610"} Feb 18 00:49:17 crc kubenswrapper[4847]: I0218 00:49:17.546483 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-59794cdfcf-5hdcv" event={"ID":"7250520c-bcaf-4564-9155-8ecada7c6880","Type":"ContainerStarted","Data":"46c5f4faa67bb4bbb22bf81156bc1eaca3b40221230142cbfa6aaf2ee605428a"} Feb 18 00:49:17 crc kubenswrapper[4847]: I0218 00:49:17.564221 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5b8c9bd889-lvxrd" podStartSLOduration=2.564201069 podStartE2EDuration="2.564201069s" podCreationTimestamp="2026-02-18 00:49:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:49:17.558482806 +0000 UTC m=+1430.935833768" watchObservedRunningTime="2026-02-18 00:49:17.564201069 +0000 UTC m=+1430.941552011" Feb 18 00:49:18 crc kubenswrapper[4847]: I0218 00:49:18.838699 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7df4cf8969-f69sk"] Feb 18 00:49:18 crc kubenswrapper[4847]: I0218 00:49:18.840903 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:18 crc kubenswrapper[4847]: I0218 00:49:18.847127 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 18 00:49:18 crc kubenswrapper[4847]: I0218 00:49:18.847205 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 18 00:49:18 crc kubenswrapper[4847]: I0218 00:49:18.847303 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 18 00:49:18 crc kubenswrapper[4847]: I0218 00:49:18.859871 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7df4cf8969-f69sk"] Feb 18 00:49:18 crc kubenswrapper[4847]: I0218 00:49:18.967345 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-public-tls-certs\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:18 crc kubenswrapper[4847]: I0218 00:49:18.967434 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-config-data\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:18 crc kubenswrapper[4847]: I0218 00:49:18.967729 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-internal-tls-certs\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:18 crc kubenswrapper[4847]: 
I0218 00:49:18.967883 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-combined-ca-bundle\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:18 crc kubenswrapper[4847]: I0218 00:49:18.967969 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-etc-swift\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:18 crc kubenswrapper[4847]: I0218 00:49:18.968087 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l88k\" (UniqueName: \"kubernetes.io/projected/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-kube-api-access-4l88k\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:18 crc kubenswrapper[4847]: I0218 00:49:18.968168 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-log-httpd\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:18 crc kubenswrapper[4847]: I0218 00:49:18.968327 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-run-httpd\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 
00:49:19 crc kubenswrapper[4847]: I0218 00:49:19.070695 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l88k\" (UniqueName: \"kubernetes.io/projected/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-kube-api-access-4l88k\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:19 crc kubenswrapper[4847]: I0218 00:49:19.071134 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-log-httpd\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:19 crc kubenswrapper[4847]: I0218 00:49:19.071206 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-run-httpd\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:19 crc kubenswrapper[4847]: I0218 00:49:19.071249 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-public-tls-certs\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:19 crc kubenswrapper[4847]: I0218 00:49:19.071328 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-config-data\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:19 crc kubenswrapper[4847]: I0218 00:49:19.071377 4847 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-internal-tls-certs\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:19 crc kubenswrapper[4847]: I0218 00:49:19.071420 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-combined-ca-bundle\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:19 crc kubenswrapper[4847]: I0218 00:49:19.071458 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-etc-swift\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:19 crc kubenswrapper[4847]: I0218 00:49:19.071564 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-log-httpd\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:19 crc kubenswrapper[4847]: I0218 00:49:19.071731 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-run-httpd\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:19 crc kubenswrapper[4847]: I0218 00:49:19.079594 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/projected/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-etc-swift\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:19 crc kubenswrapper[4847]: I0218 00:49:19.080531 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-public-tls-certs\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:19 crc kubenswrapper[4847]: I0218 00:49:19.081414 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-combined-ca-bundle\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:19 crc kubenswrapper[4847]: I0218 00:49:19.082631 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-internal-tls-certs\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:19 crc kubenswrapper[4847]: I0218 00:49:19.097915 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l88k\" (UniqueName: \"kubernetes.io/projected/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-kube-api-access-4l88k\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:19 crc kubenswrapper[4847]: I0218 00:49:19.110553 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/30cfe0d1-2602-42ae-b1b3-3f4e562c13c6-config-data\") pod \"swift-proxy-7df4cf8969-f69sk\" (UID: \"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6\") " pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:19 crc kubenswrapper[4847]: I0218 00:49:19.163024 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:19 crc kubenswrapper[4847]: I0218 00:49:19.174142 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s7pbj" Feb 18 00:49:19 crc kubenswrapper[4847]: I0218 00:49:19.230004 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s7pbj" Feb 18 00:49:19 crc kubenswrapper[4847]: I0218 00:49:19.424243 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s7pbj"] Feb 18 00:49:20 crc kubenswrapper[4847]: I0218 00:49:20.110358 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 18 00:49:20 crc kubenswrapper[4847]: I0218 00:49:20.617968 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s7pbj" podUID="8ec71ab6-3f40-4239-b6b7-db48ce3aaca1" containerName="registry-server" containerID="cri-o://2db04098ac1e8ae8295c69754f9ead7e314877a9f37419fcc918274785c6a83c" gracePeriod=2 Feb 18 00:49:21 crc kubenswrapper[4847]: I0218 00:49:21.496103 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:49:21 crc kubenswrapper[4847]: I0218 00:49:21.497259 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b65fe68c-6cd7-4a94-8d02-1c84419628d5" containerName="ceilometer-central-agent" containerID="cri-o://f19c9fb397bc77a646bef92be811569a6ac42633a72a017f03ab0854c9a4e2d1" gracePeriod=30 Feb 18 00:49:21 
crc kubenswrapper[4847]: I0218 00:49:21.497360 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b65fe68c-6cd7-4a94-8d02-1c84419628d5" containerName="ceilometer-notification-agent" containerID="cri-o://0c7f17ecf6ee8c806ed10be9a55ccd029260d8a7c3ad803972003c3758d9d6bb" gracePeriod=30 Feb 18 00:49:21 crc kubenswrapper[4847]: I0218 00:49:21.497302 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b65fe68c-6cd7-4a94-8d02-1c84419628d5" containerName="proxy-httpd" containerID="cri-o://ec39ee5f634c877e26b18d94affe0edad0d99af50d157477f2a6dd96452cb94b" gracePeriod=30 Feb 18 00:49:21 crc kubenswrapper[4847]: I0218 00:49:21.497319 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b65fe68c-6cd7-4a94-8d02-1c84419628d5" containerName="sg-core" containerID="cri-o://564465d8075ee018b487799132a846e25de7acb794f1c7fc38c273a5d7ea0862" gracePeriod=30 Feb 18 00:49:21 crc kubenswrapper[4847]: I0218 00:49:21.501843 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 00:49:21 crc kubenswrapper[4847]: I0218 00:49:21.660034 4847 generic.go:334] "Generic (PLEG): container finished" podID="8ec71ab6-3f40-4239-b6b7-db48ce3aaca1" containerID="2db04098ac1e8ae8295c69754f9ead7e314877a9f37419fcc918274785c6a83c" exitCode=0 Feb 18 00:49:21 crc kubenswrapper[4847]: I0218 00:49:21.660159 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7pbj" event={"ID":"8ec71ab6-3f40-4239-b6b7-db48ce3aaca1","Type":"ContainerDied","Data":"2db04098ac1e8ae8295c69754f9ead7e314877a9f37419fcc918274785c6a83c"} Feb 18 00:49:21 crc kubenswrapper[4847]: I0218 00:49:21.664931 4847 generic.go:334] "Generic (PLEG): container finished" podID="b65fe68c-6cd7-4a94-8d02-1c84419628d5" 
containerID="564465d8075ee018b487799132a846e25de7acb794f1c7fc38c273a5d7ea0862" exitCode=2 Feb 18 00:49:21 crc kubenswrapper[4847]: I0218 00:49:21.664994 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b65fe68c-6cd7-4a94-8d02-1c84419628d5","Type":"ContainerDied","Data":"564465d8075ee018b487799132a846e25de7acb794f1c7fc38c273a5d7ea0862"} Feb 18 00:49:22 crc kubenswrapper[4847]: I0218 00:49:22.682179 4847 generic.go:334] "Generic (PLEG): container finished" podID="b65fe68c-6cd7-4a94-8d02-1c84419628d5" containerID="ec39ee5f634c877e26b18d94affe0edad0d99af50d157477f2a6dd96452cb94b" exitCode=0 Feb 18 00:49:22 crc kubenswrapper[4847]: I0218 00:49:22.682217 4847 generic.go:334] "Generic (PLEG): container finished" podID="b65fe68c-6cd7-4a94-8d02-1c84419628d5" containerID="0c7f17ecf6ee8c806ed10be9a55ccd029260d8a7c3ad803972003c3758d9d6bb" exitCode=0 Feb 18 00:49:22 crc kubenswrapper[4847]: I0218 00:49:22.682229 4847 generic.go:334] "Generic (PLEG): container finished" podID="b65fe68c-6cd7-4a94-8d02-1c84419628d5" containerID="f19c9fb397bc77a646bef92be811569a6ac42633a72a017f03ab0854c9a4e2d1" exitCode=0 Feb 18 00:49:22 crc kubenswrapper[4847]: I0218 00:49:22.682252 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b65fe68c-6cd7-4a94-8d02-1c84419628d5","Type":"ContainerDied","Data":"ec39ee5f634c877e26b18d94affe0edad0d99af50d157477f2a6dd96452cb94b"} Feb 18 00:49:22 crc kubenswrapper[4847]: I0218 00:49:22.682285 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b65fe68c-6cd7-4a94-8d02-1c84419628d5","Type":"ContainerDied","Data":"0c7f17ecf6ee8c806ed10be9a55ccd029260d8a7c3ad803972003c3758d9d6bb"} Feb 18 00:49:22 crc kubenswrapper[4847]: I0218 00:49:22.682299 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b65fe68c-6cd7-4a94-8d02-1c84419628d5","Type":"ContainerDied","Data":"f19c9fb397bc77a646bef92be811569a6ac42633a72a017f03ab0854c9a4e2d1"} Feb 18 00:49:23 crc kubenswrapper[4847]: I0218 00:49:23.491308 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:49:23 crc kubenswrapper[4847]: I0218 00:49:23.491852 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:49:23 crc kubenswrapper[4847]: I0218 00:49:23.714468 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jt7tq" Feb 18 00:49:23 crc kubenswrapper[4847]: I0218 00:49:23.846509 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jt7tq" Feb 18 00:49:23 crc kubenswrapper[4847]: I0218 00:49:23.940773 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-67b9f7bd8b-phnps"] Feb 18 00:49:23 crc kubenswrapper[4847]: I0218 00:49:23.942114 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-67b9f7bd8b-phnps" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.070171 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-58b87f4965-bck5v"] Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.071978 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-58b87f4965-bck5v" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.097879 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67a5eed6-fda8-4fca-bd98-6bcb2270d646-config-data-custom\") pod \"heat-engine-67b9f7bd8b-phnps\" (UID: \"67a5eed6-fda8-4fca-bd98-6bcb2270d646\") " pod="openstack/heat-engine-67b9f7bd8b-phnps" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.098304 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a5eed6-fda8-4fca-bd98-6bcb2270d646-combined-ca-bundle\") pod \"heat-engine-67b9f7bd8b-phnps\" (UID: \"67a5eed6-fda8-4fca-bd98-6bcb2270d646\") " pod="openstack/heat-engine-67b9f7bd8b-phnps" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.098353 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk9c8\" (UniqueName: \"kubernetes.io/projected/67a5eed6-fda8-4fca-bd98-6bcb2270d646-kube-api-access-wk9c8\") pod \"heat-engine-67b9f7bd8b-phnps\" (UID: \"67a5eed6-fda8-4fca-bd98-6bcb2270d646\") " pod="openstack/heat-engine-67b9f7bd8b-phnps" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.098516 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a5eed6-fda8-4fca-bd98-6bcb2270d646-config-data\") pod \"heat-engine-67b9f7bd8b-phnps\" (UID: \"67a5eed6-fda8-4fca-bd98-6bcb2270d646\") " pod="openstack/heat-engine-67b9f7bd8b-phnps" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.103629 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-67b9f7bd8b-phnps"] Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.114135 4847 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/heat-api-5d8485fcfd-qf9k4"] Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.115576 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5d8485fcfd-qf9k4" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.134538 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-58b87f4965-bck5v"] Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.144951 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5d8485fcfd-qf9k4"] Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.154824 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jt7tq"] Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.200444 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67a5eed6-fda8-4fca-bd98-6bcb2270d646-config-data-custom\") pod \"heat-engine-67b9f7bd8b-phnps\" (UID: \"67a5eed6-fda8-4fca-bd98-6bcb2270d646\") " pod="openstack/heat-engine-67b9f7bd8b-phnps" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.200517 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0198073d-b902-4914-a519-0c9ec3aed4eb-combined-ca-bundle\") pod \"heat-cfnapi-58b87f4965-bck5v\" (UID: \"0198073d-b902-4914-a519-0c9ec3aed4eb\") " pod="openstack/heat-cfnapi-58b87f4965-bck5v" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.200562 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a5eed6-fda8-4fca-bd98-6bcb2270d646-combined-ca-bundle\") pod \"heat-engine-67b9f7bd8b-phnps\" (UID: \"67a5eed6-fda8-4fca-bd98-6bcb2270d646\") " pod="openstack/heat-engine-67b9f7bd8b-phnps" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.200594 4847 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0198073d-b902-4914-a519-0c9ec3aed4eb-config-data-custom\") pod \"heat-cfnapi-58b87f4965-bck5v\" (UID: \"0198073d-b902-4914-a519-0c9ec3aed4eb\") " pod="openstack/heat-cfnapi-58b87f4965-bck5v" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.200626 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz4jc\" (UniqueName: \"kubernetes.io/projected/0198073d-b902-4914-a519-0c9ec3aed4eb-kube-api-access-fz4jc\") pod \"heat-cfnapi-58b87f4965-bck5v\" (UID: \"0198073d-b902-4914-a519-0c9ec3aed4eb\") " pod="openstack/heat-cfnapi-58b87f4965-bck5v" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.200648 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk9c8\" (UniqueName: \"kubernetes.io/projected/67a5eed6-fda8-4fca-bd98-6bcb2270d646-kube-api-access-wk9c8\") pod \"heat-engine-67b9f7bd8b-phnps\" (UID: \"67a5eed6-fda8-4fca-bd98-6bcb2270d646\") " pod="openstack/heat-engine-67b9f7bd8b-phnps" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.200684 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a5eed6-fda8-4fca-bd98-6bcb2270d646-config-data\") pod \"heat-engine-67b9f7bd8b-phnps\" (UID: \"67a5eed6-fda8-4fca-bd98-6bcb2270d646\") " pod="openstack/heat-engine-67b9f7bd8b-phnps" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.200714 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0198073d-b902-4914-a519-0c9ec3aed4eb-config-data\") pod \"heat-cfnapi-58b87f4965-bck5v\" (UID: \"0198073d-b902-4914-a519-0c9ec3aed4eb\") " pod="openstack/heat-cfnapi-58b87f4965-bck5v" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 
00:49:24.211975 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a5eed6-fda8-4fca-bd98-6bcb2270d646-combined-ca-bundle\") pod \"heat-engine-67b9f7bd8b-phnps\" (UID: \"67a5eed6-fda8-4fca-bd98-6bcb2270d646\") " pod="openstack/heat-engine-67b9f7bd8b-phnps" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.212612 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/67a5eed6-fda8-4fca-bd98-6bcb2270d646-config-data-custom\") pod \"heat-engine-67b9f7bd8b-phnps\" (UID: \"67a5eed6-fda8-4fca-bd98-6bcb2270d646\") " pod="openstack/heat-engine-67b9f7bd8b-phnps" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.213475 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a5eed6-fda8-4fca-bd98-6bcb2270d646-config-data\") pod \"heat-engine-67b9f7bd8b-phnps\" (UID: \"67a5eed6-fda8-4fca-bd98-6bcb2270d646\") " pod="openstack/heat-engine-67b9f7bd8b-phnps" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.220575 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk9c8\" (UniqueName: \"kubernetes.io/projected/67a5eed6-fda8-4fca-bd98-6bcb2270d646-kube-api-access-wk9c8\") pod \"heat-engine-67b9f7bd8b-phnps\" (UID: \"67a5eed6-fda8-4fca-bd98-6bcb2270d646\") " pod="openstack/heat-engine-67b9f7bd8b-phnps" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.285185 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-67b9f7bd8b-phnps" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.303006 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04eb603b-ceea-4448-98ee-bc1db325756e-config-data\") pod \"heat-api-5d8485fcfd-qf9k4\" (UID: \"04eb603b-ceea-4448-98ee-bc1db325756e\") " pod="openstack/heat-api-5d8485fcfd-qf9k4" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.303274 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jbj7\" (UniqueName: \"kubernetes.io/projected/04eb603b-ceea-4448-98ee-bc1db325756e-kube-api-access-7jbj7\") pod \"heat-api-5d8485fcfd-qf9k4\" (UID: \"04eb603b-ceea-4448-98ee-bc1db325756e\") " pod="openstack/heat-api-5d8485fcfd-qf9k4" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.303362 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04eb603b-ceea-4448-98ee-bc1db325756e-combined-ca-bundle\") pod \"heat-api-5d8485fcfd-qf9k4\" (UID: \"04eb603b-ceea-4448-98ee-bc1db325756e\") " pod="openstack/heat-api-5d8485fcfd-qf9k4" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.303417 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0198073d-b902-4914-a519-0c9ec3aed4eb-combined-ca-bundle\") pod \"heat-cfnapi-58b87f4965-bck5v\" (UID: \"0198073d-b902-4914-a519-0c9ec3aed4eb\") " pod="openstack/heat-cfnapi-58b87f4965-bck5v" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.303486 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04eb603b-ceea-4448-98ee-bc1db325756e-config-data-custom\") pod \"heat-api-5d8485fcfd-qf9k4\" 
(UID: \"04eb603b-ceea-4448-98ee-bc1db325756e\") " pod="openstack/heat-api-5d8485fcfd-qf9k4" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.304115 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0198073d-b902-4914-a519-0c9ec3aed4eb-config-data-custom\") pod \"heat-cfnapi-58b87f4965-bck5v\" (UID: \"0198073d-b902-4914-a519-0c9ec3aed4eb\") " pod="openstack/heat-cfnapi-58b87f4965-bck5v" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.304213 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz4jc\" (UniqueName: \"kubernetes.io/projected/0198073d-b902-4914-a519-0c9ec3aed4eb-kube-api-access-fz4jc\") pod \"heat-cfnapi-58b87f4965-bck5v\" (UID: \"0198073d-b902-4914-a519-0c9ec3aed4eb\") " pod="openstack/heat-cfnapi-58b87f4965-bck5v" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.304317 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0198073d-b902-4914-a519-0c9ec3aed4eb-config-data\") pod \"heat-cfnapi-58b87f4965-bck5v\" (UID: \"0198073d-b902-4914-a519-0c9ec3aed4eb\") " pod="openstack/heat-cfnapi-58b87f4965-bck5v" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.309018 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0198073d-b902-4914-a519-0c9ec3aed4eb-config-data-custom\") pod \"heat-cfnapi-58b87f4965-bck5v\" (UID: \"0198073d-b902-4914-a519-0c9ec3aed4eb\") " pod="openstack/heat-cfnapi-58b87f4965-bck5v" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.310057 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0198073d-b902-4914-a519-0c9ec3aed4eb-combined-ca-bundle\") pod \"heat-cfnapi-58b87f4965-bck5v\" (UID: \"0198073d-b902-4914-a519-0c9ec3aed4eb\") " 
pod="openstack/heat-cfnapi-58b87f4965-bck5v" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.311348 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0198073d-b902-4914-a519-0c9ec3aed4eb-config-data\") pod \"heat-cfnapi-58b87f4965-bck5v\" (UID: \"0198073d-b902-4914-a519-0c9ec3aed4eb\") " pod="openstack/heat-cfnapi-58b87f4965-bck5v" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.323464 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz4jc\" (UniqueName: \"kubernetes.io/projected/0198073d-b902-4914-a519-0c9ec3aed4eb-kube-api-access-fz4jc\") pod \"heat-cfnapi-58b87f4965-bck5v\" (UID: \"0198073d-b902-4914-a519-0c9ec3aed4eb\") " pod="openstack/heat-cfnapi-58b87f4965-bck5v" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.397706 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-58b87f4965-bck5v" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.406863 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04eb603b-ceea-4448-98ee-bc1db325756e-config-data-custom\") pod \"heat-api-5d8485fcfd-qf9k4\" (UID: \"04eb603b-ceea-4448-98ee-bc1db325756e\") " pod="openstack/heat-api-5d8485fcfd-qf9k4" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.407013 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04eb603b-ceea-4448-98ee-bc1db325756e-config-data\") pod \"heat-api-5d8485fcfd-qf9k4\" (UID: \"04eb603b-ceea-4448-98ee-bc1db325756e\") " pod="openstack/heat-api-5d8485fcfd-qf9k4" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.407118 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jbj7\" (UniqueName: 
\"kubernetes.io/projected/04eb603b-ceea-4448-98ee-bc1db325756e-kube-api-access-7jbj7\") pod \"heat-api-5d8485fcfd-qf9k4\" (UID: \"04eb603b-ceea-4448-98ee-bc1db325756e\") " pod="openstack/heat-api-5d8485fcfd-qf9k4" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.407145 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04eb603b-ceea-4448-98ee-bc1db325756e-combined-ca-bundle\") pod \"heat-api-5d8485fcfd-qf9k4\" (UID: \"04eb603b-ceea-4448-98ee-bc1db325756e\") " pod="openstack/heat-api-5d8485fcfd-qf9k4" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.413508 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04eb603b-ceea-4448-98ee-bc1db325756e-combined-ca-bundle\") pod \"heat-api-5d8485fcfd-qf9k4\" (UID: \"04eb603b-ceea-4448-98ee-bc1db325756e\") " pod="openstack/heat-api-5d8485fcfd-qf9k4" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.426718 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04eb603b-ceea-4448-98ee-bc1db325756e-config-data\") pod \"heat-api-5d8485fcfd-qf9k4\" (UID: \"04eb603b-ceea-4448-98ee-bc1db325756e\") " pod="openstack/heat-api-5d8485fcfd-qf9k4" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.445528 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04eb603b-ceea-4448-98ee-bc1db325756e-config-data-custom\") pod \"heat-api-5d8485fcfd-qf9k4\" (UID: \"04eb603b-ceea-4448-98ee-bc1db325756e\") " pod="openstack/heat-api-5d8485fcfd-qf9k4" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.460235 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jbj7\" (UniqueName: \"kubernetes.io/projected/04eb603b-ceea-4448-98ee-bc1db325756e-kube-api-access-7jbj7\") pod 
\"heat-api-5d8485fcfd-qf9k4\" (UID: \"04eb603b-ceea-4448-98ee-bc1db325756e\") " pod="openstack/heat-api-5d8485fcfd-qf9k4" Feb 18 00:49:24 crc kubenswrapper[4847]: I0218 00:49:24.744331 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5d8485fcfd-qf9k4" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.288510 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-59794cdfcf-5hdcv"] Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.305769 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-79cfb99699-ctzx2"] Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.326441 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-75d56c557b-p6pn6"] Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.327891 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-75d56c557b-p6pn6" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.330670 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.330799 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.340047 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5fd77b47d6-ms5hf"] Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.341411 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.346283 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.346694 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.349134 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-75d56c557b-p6pn6"] Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.386644 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5fd77b47d6-ms5hf"] Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.442390 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgwbl\" (UniqueName: \"kubernetes.io/projected/724e605e-6796-4384-8832-ab9bcec6a585-kube-api-access-kgwbl\") pod \"heat-cfnapi-5fd77b47d6-ms5hf\" (UID: \"724e605e-6796-4384-8832-ab9bcec6a585\") " pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.442440 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5x86\" (UniqueName: \"kubernetes.io/projected/1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21-kube-api-access-f5x86\") pod \"heat-api-75d56c557b-p6pn6\" (UID: \"1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21\") " pod="openstack/heat-api-75d56c557b-p6pn6" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.442483 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/724e605e-6796-4384-8832-ab9bcec6a585-combined-ca-bundle\") pod \"heat-cfnapi-5fd77b47d6-ms5hf\" (UID: \"724e605e-6796-4384-8832-ab9bcec6a585\") " pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" Feb 18 
00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.442505 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/724e605e-6796-4384-8832-ab9bcec6a585-public-tls-certs\") pod \"heat-cfnapi-5fd77b47d6-ms5hf\" (UID: \"724e605e-6796-4384-8832-ab9bcec6a585\") " pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.442529 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21-config-data\") pod \"heat-api-75d56c557b-p6pn6\" (UID: \"1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21\") " pod="openstack/heat-api-75d56c557b-p6pn6" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.442563 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21-public-tls-certs\") pod \"heat-api-75d56c557b-p6pn6\" (UID: \"1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21\") " pod="openstack/heat-api-75d56c557b-p6pn6" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.442582 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/724e605e-6796-4384-8832-ab9bcec6a585-internal-tls-certs\") pod \"heat-cfnapi-5fd77b47d6-ms5hf\" (UID: \"724e605e-6796-4384-8832-ab9bcec6a585\") " pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.442611 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21-internal-tls-certs\") pod \"heat-api-75d56c557b-p6pn6\" (UID: \"1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21\") " 
pod="openstack/heat-api-75d56c557b-p6pn6" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.442644 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/724e605e-6796-4384-8832-ab9bcec6a585-config-data-custom\") pod \"heat-cfnapi-5fd77b47d6-ms5hf\" (UID: \"724e605e-6796-4384-8832-ab9bcec6a585\") " pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.442659 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21-combined-ca-bundle\") pod \"heat-api-75d56c557b-p6pn6\" (UID: \"1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21\") " pod="openstack/heat-api-75d56c557b-p6pn6" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.442723 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/724e605e-6796-4384-8832-ab9bcec6a585-config-data\") pod \"heat-cfnapi-5fd77b47d6-ms5hf\" (UID: \"724e605e-6796-4384-8832-ab9bcec6a585\") " pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.442770 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21-config-data-custom\") pod \"heat-api-75d56c557b-p6pn6\" (UID: \"1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21\") " pod="openstack/heat-api-75d56c557b-p6pn6" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.450922 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b65fe68c-6cd7-4a94-8d02-1c84419628d5" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.191:3000/\": dial tcp 10.217.0.191:3000: connect: 
connection refused" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.544628 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgwbl\" (UniqueName: \"kubernetes.io/projected/724e605e-6796-4384-8832-ab9bcec6a585-kube-api-access-kgwbl\") pod \"heat-cfnapi-5fd77b47d6-ms5hf\" (UID: \"724e605e-6796-4384-8832-ab9bcec6a585\") " pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.544921 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5x86\" (UniqueName: \"kubernetes.io/projected/1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21-kube-api-access-f5x86\") pod \"heat-api-75d56c557b-p6pn6\" (UID: \"1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21\") " pod="openstack/heat-api-75d56c557b-p6pn6" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.544961 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/724e605e-6796-4384-8832-ab9bcec6a585-combined-ca-bundle\") pod \"heat-cfnapi-5fd77b47d6-ms5hf\" (UID: \"724e605e-6796-4384-8832-ab9bcec6a585\") " pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.544983 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/724e605e-6796-4384-8832-ab9bcec6a585-public-tls-certs\") pod \"heat-cfnapi-5fd77b47d6-ms5hf\" (UID: \"724e605e-6796-4384-8832-ab9bcec6a585\") " pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.545007 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21-config-data\") pod \"heat-api-75d56c557b-p6pn6\" (UID: \"1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21\") " pod="openstack/heat-api-75d56c557b-p6pn6" Feb 18 00:49:25 crc 
kubenswrapper[4847]: I0218 00:49:25.545039 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21-public-tls-certs\") pod \"heat-api-75d56c557b-p6pn6\" (UID: \"1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21\") " pod="openstack/heat-api-75d56c557b-p6pn6" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.545060 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/724e605e-6796-4384-8832-ab9bcec6a585-internal-tls-certs\") pod \"heat-cfnapi-5fd77b47d6-ms5hf\" (UID: \"724e605e-6796-4384-8832-ab9bcec6a585\") " pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.545079 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21-internal-tls-certs\") pod \"heat-api-75d56c557b-p6pn6\" (UID: \"1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21\") " pod="openstack/heat-api-75d56c557b-p6pn6" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.545099 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21-combined-ca-bundle\") pod \"heat-api-75d56c557b-p6pn6\" (UID: \"1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21\") " pod="openstack/heat-api-75d56c557b-p6pn6" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.545970 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/724e605e-6796-4384-8832-ab9bcec6a585-config-data-custom\") pod \"heat-cfnapi-5fd77b47d6-ms5hf\" (UID: \"724e605e-6796-4384-8832-ab9bcec6a585\") " pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.546063 4847 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/724e605e-6796-4384-8832-ab9bcec6a585-config-data\") pod \"heat-cfnapi-5fd77b47d6-ms5hf\" (UID: \"724e605e-6796-4384-8832-ab9bcec6a585\") " pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.546126 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21-config-data-custom\") pod \"heat-api-75d56c557b-p6pn6\" (UID: \"1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21\") " pod="openstack/heat-api-75d56c557b-p6pn6" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.553225 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21-combined-ca-bundle\") pod \"heat-api-75d56c557b-p6pn6\" (UID: \"1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21\") " pod="openstack/heat-api-75d56c557b-p6pn6" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.555783 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21-internal-tls-certs\") pod \"heat-api-75d56c557b-p6pn6\" (UID: \"1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21\") " pod="openstack/heat-api-75d56c557b-p6pn6" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.558356 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/724e605e-6796-4384-8832-ab9bcec6a585-config-data-custom\") pod \"heat-cfnapi-5fd77b47d6-ms5hf\" (UID: \"724e605e-6796-4384-8832-ab9bcec6a585\") " pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.558450 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21-public-tls-certs\") pod \"heat-api-75d56c557b-p6pn6\" (UID: \"1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21\") " pod="openstack/heat-api-75d56c557b-p6pn6" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.558782 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21-config-data-custom\") pod \"heat-api-75d56c557b-p6pn6\" (UID: \"1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21\") " pod="openstack/heat-api-75d56c557b-p6pn6" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.559240 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/724e605e-6796-4384-8832-ab9bcec6a585-config-data\") pod \"heat-cfnapi-5fd77b47d6-ms5hf\" (UID: \"724e605e-6796-4384-8832-ab9bcec6a585\") " pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.561388 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/724e605e-6796-4384-8832-ab9bcec6a585-combined-ca-bundle\") pod \"heat-cfnapi-5fd77b47d6-ms5hf\" (UID: \"724e605e-6796-4384-8832-ab9bcec6a585\") " pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.561401 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/724e605e-6796-4384-8832-ab9bcec6a585-public-tls-certs\") pod \"heat-cfnapi-5fd77b47d6-ms5hf\" (UID: \"724e605e-6796-4384-8832-ab9bcec6a585\") " pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.563335 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21-config-data\") 
pod \"heat-api-75d56c557b-p6pn6\" (UID: \"1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21\") " pod="openstack/heat-api-75d56c557b-p6pn6" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.564943 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/724e605e-6796-4384-8832-ab9bcec6a585-internal-tls-certs\") pod \"heat-cfnapi-5fd77b47d6-ms5hf\" (UID: \"724e605e-6796-4384-8832-ab9bcec6a585\") " pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.566690 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgwbl\" (UniqueName: \"kubernetes.io/projected/724e605e-6796-4384-8832-ab9bcec6a585-kube-api-access-kgwbl\") pod \"heat-cfnapi-5fd77b47d6-ms5hf\" (UID: \"724e605e-6796-4384-8832-ab9bcec6a585\") " pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.569949 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5x86\" (UniqueName: \"kubernetes.io/projected/1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21-kube-api-access-f5x86\") pod \"heat-api-75d56c557b-p6pn6\" (UID: \"1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21\") " pod="openstack/heat-api-75d56c557b-p6pn6" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.675944 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-75d56c557b-p6pn6" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.682662 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" Feb 18 00:49:25 crc kubenswrapper[4847]: I0218 00:49:25.721793 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jt7tq" podUID="de202952-4ed4-4cc5-8eb4-1d167600a639" containerName="registry-server" containerID="cri-o://1b78bd10a12af759abac3c8817dd81449f68b073c349c83e4f3da78be501e958" gracePeriod=2 Feb 18 00:49:26 crc kubenswrapper[4847]: I0218 00:49:26.739767 4847 generic.go:334] "Generic (PLEG): container finished" podID="40db5dc9-34a9-467b-9617-56ee9fc2d7e0" containerID="c2505a71abb7818cc5fd08322fc690f6d6a1d58559dc3464a68aa9d724835339" exitCode=137 Feb 18 00:49:26 crc kubenswrapper[4847]: I0218 00:49:26.739866 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"40db5dc9-34a9-467b-9617-56ee9fc2d7e0","Type":"ContainerDied","Data":"c2505a71abb7818cc5fd08322fc690f6d6a1d58559dc3464a68aa9d724835339"} Feb 18 00:49:26 crc kubenswrapper[4847]: I0218 00:49:26.745109 4847 generic.go:334] "Generic (PLEG): container finished" podID="de202952-4ed4-4cc5-8eb4-1d167600a639" containerID="1b78bd10a12af759abac3c8817dd81449f68b073c349c83e4f3da78be501e958" exitCode=0 Feb 18 00:49:26 crc kubenswrapper[4847]: I0218 00:49:26.745161 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jt7tq" event={"ID":"de202952-4ed4-4cc5-8eb4-1d167600a639","Type":"ContainerDied","Data":"1b78bd10a12af759abac3c8817dd81449f68b073c349c83e4f3da78be501e958"} Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.347126 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s7pbj" Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.393816 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ec71ab6-3f40-4239-b6b7-db48ce3aaca1-utilities\") pod \"8ec71ab6-3f40-4239-b6b7-db48ce3aaca1\" (UID: \"8ec71ab6-3f40-4239-b6b7-db48ce3aaca1\") " Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.394079 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ec71ab6-3f40-4239-b6b7-db48ce3aaca1-catalog-content\") pod \"8ec71ab6-3f40-4239-b6b7-db48ce3aaca1\" (UID: \"8ec71ab6-3f40-4239-b6b7-db48ce3aaca1\") " Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.394124 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsqzb\" (UniqueName: \"kubernetes.io/projected/8ec71ab6-3f40-4239-b6b7-db48ce3aaca1-kube-api-access-qsqzb\") pod \"8ec71ab6-3f40-4239-b6b7-db48ce3aaca1\" (UID: \"8ec71ab6-3f40-4239-b6b7-db48ce3aaca1\") " Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.394254 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ec71ab6-3f40-4239-b6b7-db48ce3aaca1-utilities" (OuterVolumeSpecName: "utilities") pod "8ec71ab6-3f40-4239-b6b7-db48ce3aaca1" (UID: "8ec71ab6-3f40-4239-b6b7-db48ce3aaca1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.399168 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ec71ab6-3f40-4239-b6b7-db48ce3aaca1-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.400831 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ec71ab6-3f40-4239-b6b7-db48ce3aaca1-kube-api-access-qsqzb" (OuterVolumeSpecName: "kube-api-access-qsqzb") pod "8ec71ab6-3f40-4239-b6b7-db48ce3aaca1" (UID: "8ec71ab6-3f40-4239-b6b7-db48ce3aaca1"). InnerVolumeSpecName "kube-api-access-qsqzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.502718 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsqzb\" (UniqueName: \"kubernetes.io/projected/8ec71ab6-3f40-4239-b6b7-db48ce3aaca1-kube-api-access-qsqzb\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.644964 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ec71ab6-3f40-4239-b6b7-db48ce3aaca1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ec71ab6-3f40-4239-b6b7-db48ce3aaca1" (UID: "8ec71ab6-3f40-4239-b6b7-db48ce3aaca1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.710265 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ec71ab6-3f40-4239-b6b7-db48ce3aaca1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.726190 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jt7tq" Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.857407 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7pbj" event={"ID":"8ec71ab6-3f40-4239-b6b7-db48ce3aaca1","Type":"ContainerDied","Data":"9a9c80be7ca4d1eb9f517510e1aae5cb55d5dffdce4014ca750a9b3d85d0eac9"} Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.857456 4847 scope.go:117] "RemoveContainer" containerID="2db04098ac1e8ae8295c69754f9ead7e314877a9f37419fcc918274785c6a83c" Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.857827 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7pbj" Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.861310 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.864913 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jt7tq" event={"ID":"de202952-4ed4-4cc5-8eb4-1d167600a639","Type":"ContainerDied","Data":"ab9c8b0b3aacef7f3fc59fb8d33a5b3e319b95472e0e487cd5ae670acfd911f6"} Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.865278 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jt7tq" Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.868216 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.879711 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.916561 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de202952-4ed4-4cc5-8eb4-1d167600a639-catalog-content\") pod \"de202952-4ed4-4cc5-8eb4-1d167600a639\" (UID: \"de202952-4ed4-4cc5-8eb4-1d167600a639\") " Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.916768 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de202952-4ed4-4cc5-8eb4-1d167600a639-utilities\") pod \"de202952-4ed4-4cc5-8eb4-1d167600a639\" (UID: \"de202952-4ed4-4cc5-8eb4-1d167600a639\") " Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.916841 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpvc7\" (UniqueName: \"kubernetes.io/projected/de202952-4ed4-4cc5-8eb4-1d167600a639-kube-api-access-dpvc7\") pod \"de202952-4ed4-4cc5-8eb4-1d167600a639\" (UID: \"de202952-4ed4-4cc5-8eb4-1d167600a639\") " Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.926070 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de202952-4ed4-4cc5-8eb4-1d167600a639-utilities" (OuterVolumeSpecName: "utilities") pod "de202952-4ed4-4cc5-8eb4-1d167600a639" (UID: "de202952-4ed4-4cc5-8eb4-1d167600a639"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.950719 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de202952-4ed4-4cc5-8eb4-1d167600a639-kube-api-access-dpvc7" (OuterVolumeSpecName: "kube-api-access-dpvc7") pod "de202952-4ed4-4cc5-8eb4-1d167600a639" (UID: "de202952-4ed4-4cc5-8eb4-1d167600a639"). InnerVolumeSpecName "kube-api-access-dpvc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:49:27 crc kubenswrapper[4847]: I0218 00:49:27.974834 4847 scope.go:117] "RemoveContainer" containerID="398cc9552940ace5c4e55575e374e53d5d203d21a5704b88eac8d9dd803d35e2" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.003617 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" podStartSLOduration=13.003580383 podStartE2EDuration="13.003580383s" podCreationTimestamp="2026-02-18 00:49:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:49:27.945866959 +0000 UTC m=+1441.323217901" watchObservedRunningTime="2026-02-18 00:49:28.003580383 +0000 UTC m=+1441.380931325" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.014934 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s7pbj"] Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.018788 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-combined-ca-bundle\") pod \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.018831 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-config-data\") pod \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.018864 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr7lz\" (UniqueName: \"kubernetes.io/projected/b65fe68c-6cd7-4a94-8d02-1c84419628d5-kube-api-access-qr7lz\") pod \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.018955 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-logs\") pod \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.018972 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mwkw\" (UniqueName: \"kubernetes.io/projected/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-kube-api-access-9mwkw\") pod \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.019067 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-sg-core-conf-yaml\") pod \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.019094 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b65fe68c-6cd7-4a94-8d02-1c84419628d5-log-httpd\") pod \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 
00:49:28.019116 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-scripts\") pod \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.019136 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-config-data-custom\") pod \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.019156 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-scripts\") pod \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\" (UID: \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.019181 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b65fe68c-6cd7-4a94-8d02-1c84419628d5-run-httpd\") pod \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.019206 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-config-data\") pod \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.019237 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-etc-machine-id\") pod \"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\" (UID: 
\"40db5dc9-34a9-467b-9617-56ee9fc2d7e0\") " Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.019255 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-combined-ca-bundle\") pod \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\" (UID: \"b65fe68c-6cd7-4a94-8d02-1c84419628d5\") " Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.020074 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de202952-4ed4-4cc5-8eb4-1d167600a639-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.020089 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpvc7\" (UniqueName: \"kubernetes.io/projected/de202952-4ed4-4cc5-8eb4-1d167600a639-kube-api-access-dpvc7\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.022462 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b65fe68c-6cd7-4a94-8d02-1c84419628d5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b65fe68c-6cd7-4a94-8d02-1c84419628d5" (UID: "b65fe68c-6cd7-4a94-8d02-1c84419628d5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.028171 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s7pbj"] Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.028199 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-scripts" (OuterVolumeSpecName: "scripts") pod "40db5dc9-34a9-467b-9617-56ee9fc2d7e0" (UID: "40db5dc9-34a9-467b-9617-56ee9fc2d7e0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.028468 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b65fe68c-6cd7-4a94-8d02-1c84419628d5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b65fe68c-6cd7-4a94-8d02-1c84419628d5" (UID: "b65fe68c-6cd7-4a94-8d02-1c84419628d5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.028996 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "40db5dc9-34a9-467b-9617-56ee9fc2d7e0" (UID: "40db5dc9-34a9-467b-9617-56ee9fc2d7e0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.029093 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "40db5dc9-34a9-467b-9617-56ee9fc2d7e0" (UID: "40db5dc9-34a9-467b-9617-56ee9fc2d7e0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.029291 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-scripts" (OuterVolumeSpecName: "scripts") pod "b65fe68c-6cd7-4a94-8d02-1c84419628d5" (UID: "b65fe68c-6cd7-4a94-8d02-1c84419628d5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.032155 4847 scope.go:117] "RemoveContainer" containerID="4f8f3d5269e21eee2a0effe033a8dfe0bec0cdc52888ef4c27679ff4128f5235" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.032419 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-logs" (OuterVolumeSpecName: "logs") pod "40db5dc9-34a9-467b-9617-56ee9fc2d7e0" (UID: "40db5dc9-34a9-467b-9617-56ee9fc2d7e0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.041726 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b65fe68c-6cd7-4a94-8d02-1c84419628d5-kube-api-access-qr7lz" (OuterVolumeSpecName: "kube-api-access-qr7lz") pod "b65fe68c-6cd7-4a94-8d02-1c84419628d5" (UID: "b65fe68c-6cd7-4a94-8d02-1c84419628d5"). InnerVolumeSpecName "kube-api-access-qr7lz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.059008 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-kube-api-access-9mwkw" (OuterVolumeSpecName: "kube-api-access-9mwkw") pod "40db5dc9-34a9-467b-9617-56ee9fc2d7e0" (UID: "40db5dc9-34a9-467b-9617-56ee9fc2d7e0"). InnerVolumeSpecName "kube-api-access-9mwkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.075191 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de202952-4ed4-4cc5-8eb4-1d167600a639-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de202952-4ed4-4cc5-8eb4-1d167600a639" (UID: "de202952-4ed4-4cc5-8eb4-1d167600a639"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.085220 4847 scope.go:117] "RemoveContainer" containerID="1b78bd10a12af759abac3c8817dd81449f68b073c349c83e4f3da78be501e958" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.121945 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qr7lz\" (UniqueName: \"kubernetes.io/projected/b65fe68c-6cd7-4a94-8d02-1c84419628d5-kube-api-access-qr7lz\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.121970 4847 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-logs\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.121982 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mwkw\" (UniqueName: \"kubernetes.io/projected/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-kube-api-access-9mwkw\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.121991 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de202952-4ed4-4cc5-8eb4-1d167600a639-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.122000 4847 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b65fe68c-6cd7-4a94-8d02-1c84419628d5-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.122008 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.122017 4847 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.122026 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.122034 4847 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b65fe68c-6cd7-4a94-8d02-1c84419628d5-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.122042 4847 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.174667 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5d8485fcfd-qf9k4"] Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.214161 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b65fe68c-6cd7-4a94-8d02-1c84419628d5" (UID: "b65fe68c-6cd7-4a94-8d02-1c84419628d5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.224094 4847 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.270462 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-config-data" (OuterVolumeSpecName: "config-data") pod "40db5dc9-34a9-467b-9617-56ee9fc2d7e0" (UID: "40db5dc9-34a9-467b-9617-56ee9fc2d7e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.328957 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.332168 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-bf6d8bf75-gfz9n" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.338619 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40db5dc9-34a9-467b-9617-56ee9fc2d7e0" (UID: "40db5dc9-34a9-467b-9617-56ee9fc2d7e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.371503 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b65fe68c-6cd7-4a94-8d02-1c84419628d5" (UID: "b65fe68c-6cd7-4a94-8d02-1c84419628d5"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.426490 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5857d66f7d-gqg2m"] Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.427090 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5857d66f7d-gqg2m" podUID="ddb80342-6498-4e44-aa6d-72bba457dbbe" containerName="neutron-api" containerID="cri-o://643a355cc4288a509bc6c4144ab495e8828a61cf1fe162f22092013c465f4281" gracePeriod=30 Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.427649 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5857d66f7d-gqg2m" podUID="ddb80342-6498-4e44-aa6d-72bba457dbbe" containerName="neutron-httpd" containerID="cri-o://4f49aba9c883fc0dffc7b09f488580c619196525f4257b14613b0e8caa3ab209" gracePeriod=30 Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.431395 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.431420 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40db5dc9-34a9-467b-9617-56ee9fc2d7e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.444106 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-75d56c557b-p6pn6"] Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.497850 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-58b87f4965-bck5v"] Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.526794 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/swift-proxy-7df4cf8969-f69sk"] Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.536737 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-config-data" (OuterVolumeSpecName: "config-data") pod "b65fe68c-6cd7-4a94-8d02-1c84419628d5" (UID: "b65fe68c-6cd7-4a94-8d02-1c84419628d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.576716 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jt7tq"] Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.588958 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jt7tq"] Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.613702 4847 scope.go:117] "RemoveContainer" containerID="0f58ab420489608caafa39ea69c3a0aaddee8e4825d1117e5f2cf0201544e294" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.642753 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b65fe68c-6cd7-4a94-8d02-1c84419628d5-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.811047 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5fd77b47d6-ms5hf"] Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.831833 4847 scope.go:117] "RemoveContainer" containerID="69a8577d627203a73c61d97ae4418b00b25be3245036ee466f1f1ce5b6083dcb" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.845407 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-67b9f7bd8b-phnps"] Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.970426 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" 
event={"ID":"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8","Type":"ContainerStarted","Data":"d8e20d646a76c0687b92520cc134cc503412bb9bf8e42234dc22777cec414cc8"} Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.976406 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-59794cdfcf-5hdcv" event={"ID":"7250520c-bcaf-4564-9155-8ecada7c6880","Type":"ContainerStarted","Data":"5880016d5ff0b9d563c1e2e80c6087e3ada7a2bfaeee8e08fcef0d94f3600ccd"} Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.976741 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-59794cdfcf-5hdcv" podUID="7250520c-bcaf-4564-9155-8ecada7c6880" containerName="heat-api" containerID="cri-o://5880016d5ff0b9d563c1e2e80c6087e3ada7a2bfaeee8e08fcef0d94f3600ccd" gracePeriod=60 Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.976922 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-59794cdfcf-5hdcv" Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.990065 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4f55a480-6f28-47f9-aa62-f21de18ff60e","Type":"ContainerStarted","Data":"331cfff689563a5ea12256a30f84a33c39b9b7cf62f48dc8a0c98d97551afef5"} Feb 18 00:49:28 crc kubenswrapper[4847]: I0218 00:49:28.999088 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-59794cdfcf-5hdcv" podStartSLOduration=3.120384818 podStartE2EDuration="13.999072725s" podCreationTimestamp="2026-02-18 00:49:15 +0000 UTC" firstStartedPulling="2026-02-18 00:49:16.610227148 +0000 UTC m=+1429.987578090" lastFinishedPulling="2026-02-18 00:49:27.488915055 +0000 UTC m=+1440.866265997" observedRunningTime="2026-02-18 00:49:28.994516391 +0000 UTC m=+1442.371867333" watchObservedRunningTime="2026-02-18 00:49:28.999072725 +0000 UTC m=+1442.376423667" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.021557 4847 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.022486 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b65fe68c-6cd7-4a94-8d02-1c84419628d5","Type":"ContainerDied","Data":"5ef17493e07fa65fe8f3be3ee75f8fc508bfe475d71bea4c4df84719901d6f80"} Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.024396 4847 scope.go:117] "RemoveContainer" containerID="ec39ee5f634c877e26b18d94affe0edad0d99af50d157477f2a6dd96452cb94b" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.029860 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.2553733559999998 podStartE2EDuration="19.029834655s" podCreationTimestamp="2026-02-18 00:49:10 +0000 UTC" firstStartedPulling="2026-02-18 00:49:11.674501987 +0000 UTC m=+1425.051852929" lastFinishedPulling="2026-02-18 00:49:27.448963286 +0000 UTC m=+1440.826314228" observedRunningTime="2026-02-18 00:49:29.013576408 +0000 UTC m=+1442.390927350" watchObservedRunningTime="2026-02-18 00:49:29.029834655 +0000 UTC m=+1442.407185597" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.030649 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5d8485fcfd-qf9k4" event={"ID":"04eb603b-ceea-4448-98ee-bc1db325756e","Type":"ContainerStarted","Data":"dd6b3e9122790e32b74ea3513a2806ca1077f9d0aa1958fcd25db8068be98606"} Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.030762 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5d8485fcfd-qf9k4" event={"ID":"04eb603b-ceea-4448-98ee-bc1db325756e","Type":"ContainerStarted","Data":"7b1778f12e3fe3506fd7412f530ec38b6fb1b57ca1bd0d79eb111462b306da42"} Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.030945 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5d8485fcfd-qf9k4" Feb 18 
00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.034429 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"40db5dc9-34a9-467b-9617-56ee9fc2d7e0","Type":"ContainerDied","Data":"b23095b7a3603248482da7464c08be77961c3150ae02f66c700facb4a510fc2b"} Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.034576 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.037667 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-75d56c557b-p6pn6" event={"ID":"1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21","Type":"ContainerStarted","Data":"2633ff2a945f0f932ed8e1ed432d0047e6fa0664e3f085fb92b8fe3865f2777b"} Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.039379 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-75d56c557b-p6pn6" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.054945 4847 generic.go:334] "Generic (PLEG): container finished" podID="ddb80342-6498-4e44-aa6d-72bba457dbbe" containerID="4f49aba9c883fc0dffc7b09f488580c619196525f4257b14613b0e8caa3ab209" exitCode=0 Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.055048 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5857d66f7d-gqg2m" event={"ID":"ddb80342-6498-4e44-aa6d-72bba457dbbe","Type":"ContainerDied","Data":"4f49aba9c883fc0dffc7b09f488580c619196525f4257b14613b0e8caa3ab209"} Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.062379 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5d8485fcfd-qf9k4" podStartSLOduration=6.062356198 podStartE2EDuration="6.062356198s" podCreationTimestamp="2026-02-18 00:49:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:49:29.055325002 +0000 UTC m=+1442.432675934" 
watchObservedRunningTime="2026-02-18 00:49:29.062356198 +0000 UTC m=+1442.439707130" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.063041 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7df4cf8969-f69sk" event={"ID":"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6","Type":"ContainerStarted","Data":"0deceaeb04f1641147994e5e64e70ee587aa12da60fc93f6db7474cff61c3c1d"} Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.090244 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-67b9f7bd8b-phnps" event={"ID":"67a5eed6-fda8-4fca-bd98-6bcb2270d646","Type":"ContainerStarted","Data":"342e4ffcc72070004aae75b38b54cbce71375ad7578ac9253fdf4ec8fe9f0e3d"} Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.110943 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" event={"ID":"724e605e-6796-4384-8832-ab9bcec6a585","Type":"ContainerStarted","Data":"604aec4a6b75c9cc2d2e3266addd19cb20e7373a0d9dc86bc002f7d3d2042a50"} Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.138301 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.142268 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-79cfb99699-ctzx2" podUID="16f1c8da-07de-457e-a7f4-a16db587196b" containerName="heat-cfnapi" containerID="cri-o://f2de1251e50ab11e24d78c937ef4ddaa207cbf2710a76bc4ac6ef167d01c33e0" gracePeriod=60 Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.142549 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-79cfb99699-ctzx2" event={"ID":"16f1c8da-07de-457e-a7f4-a16db587196b","Type":"ContainerStarted","Data":"f2de1251e50ab11e24d78c937ef4ddaa207cbf2710a76bc4ac6ef167d01c33e0"} Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.142591 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/heat-cfnapi-79cfb99699-ctzx2" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.153761 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-58b87f4965-bck5v" event={"ID":"0198073d-b902-4914-a519-0c9ec3aed4eb","Type":"ContainerStarted","Data":"0c294f14912bfa7e14460ac41736c66070f90f76f73cd7a3bc8a84326dfdd1c6"} Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.170300 4847 scope.go:117] "RemoveContainer" containerID="564465d8075ee018b487799132a846e25de7acb794f1c7fc38c273a5d7ea0862" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.176524 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.201711 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:49:29 crc kubenswrapper[4847]: E0218 00:49:29.202287 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de202952-4ed4-4cc5-8eb4-1d167600a639" containerName="extract-utilities" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.202301 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="de202952-4ed4-4cc5-8eb4-1d167600a639" containerName="extract-utilities" Feb 18 00:49:29 crc kubenswrapper[4847]: E0218 00:49:29.202323 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec71ab6-3f40-4239-b6b7-db48ce3aaca1" containerName="registry-server" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.202330 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec71ab6-3f40-4239-b6b7-db48ce3aaca1" containerName="registry-server" Feb 18 00:49:29 crc kubenswrapper[4847]: E0218 00:49:29.202347 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de202952-4ed4-4cc5-8eb4-1d167600a639" containerName="registry-server" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.202353 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="de202952-4ed4-4cc5-8eb4-1d167600a639" 
containerName="registry-server" Feb 18 00:49:29 crc kubenswrapper[4847]: E0218 00:49:29.202370 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de202952-4ed4-4cc5-8eb4-1d167600a639" containerName="extract-content" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.202377 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="de202952-4ed4-4cc5-8eb4-1d167600a639" containerName="extract-content" Feb 18 00:49:29 crc kubenswrapper[4847]: E0218 00:49:29.202394 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec71ab6-3f40-4239-b6b7-db48ce3aaca1" containerName="extract-content" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.202400 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec71ab6-3f40-4239-b6b7-db48ce3aaca1" containerName="extract-content" Feb 18 00:49:29 crc kubenswrapper[4847]: E0218 00:49:29.202412 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40db5dc9-34a9-467b-9617-56ee9fc2d7e0" containerName="cinder-api-log" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.202417 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="40db5dc9-34a9-467b-9617-56ee9fc2d7e0" containerName="cinder-api-log" Feb 18 00:49:29 crc kubenswrapper[4847]: E0218 00:49:29.202430 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65fe68c-6cd7-4a94-8d02-1c84419628d5" containerName="proxy-httpd" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.202436 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65fe68c-6cd7-4a94-8d02-1c84419628d5" containerName="proxy-httpd" Feb 18 00:49:29 crc kubenswrapper[4847]: E0218 00:49:29.202442 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec71ab6-3f40-4239-b6b7-db48ce3aaca1" containerName="extract-utilities" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.202448 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec71ab6-3f40-4239-b6b7-db48ce3aaca1" 
containerName="extract-utilities" Feb 18 00:49:29 crc kubenswrapper[4847]: E0218 00:49:29.202458 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65fe68c-6cd7-4a94-8d02-1c84419628d5" containerName="ceilometer-notification-agent" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.202464 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65fe68c-6cd7-4a94-8d02-1c84419628d5" containerName="ceilometer-notification-agent" Feb 18 00:49:29 crc kubenswrapper[4847]: E0218 00:49:29.202473 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65fe68c-6cd7-4a94-8d02-1c84419628d5" containerName="sg-core" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.202478 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65fe68c-6cd7-4a94-8d02-1c84419628d5" containerName="sg-core" Feb 18 00:49:29 crc kubenswrapper[4847]: E0218 00:49:29.202488 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b65fe68c-6cd7-4a94-8d02-1c84419628d5" containerName="ceilometer-central-agent" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.202495 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="b65fe68c-6cd7-4a94-8d02-1c84419628d5" containerName="ceilometer-central-agent" Feb 18 00:49:29 crc kubenswrapper[4847]: E0218 00:49:29.202504 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40db5dc9-34a9-467b-9617-56ee9fc2d7e0" containerName="cinder-api" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.202509 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="40db5dc9-34a9-467b-9617-56ee9fc2d7e0" containerName="cinder-api" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.202730 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ec71ab6-3f40-4239-b6b7-db48ce3aaca1" containerName="registry-server" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.202745 4847 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b65fe68c-6cd7-4a94-8d02-1c84419628d5" containerName="proxy-httpd" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.202753 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="b65fe68c-6cd7-4a94-8d02-1c84419628d5" containerName="ceilometer-central-agent" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.202762 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="b65fe68c-6cd7-4a94-8d02-1c84419628d5" containerName="sg-core" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.202772 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="de202952-4ed4-4cc5-8eb4-1d167600a639" containerName="registry-server" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.202785 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="40db5dc9-34a9-467b-9617-56ee9fc2d7e0" containerName="cinder-api" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.202797 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="40db5dc9-34a9-467b-9617-56ee9fc2d7e0" containerName="cinder-api-log" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.202805 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="b65fe68c-6cd7-4a94-8d02-1c84419628d5" containerName="ceilometer-notification-agent" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.205879 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.210080 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.210316 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.215561 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-75d56c557b-p6pn6" podStartSLOduration=4.215543102 podStartE2EDuration="4.215543102s" podCreationTimestamp="2026-02-18 00:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:49:29.141074148 +0000 UTC m=+1442.518425100" watchObservedRunningTime="2026-02-18 00:49:29.215543102 +0000 UTC m=+1442.592894034" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.216658 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.283769 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-79cfb99699-ctzx2" podStartSLOduration=3.462407446 podStartE2EDuration="14.283746048s" podCreationTimestamp="2026-02-18 00:49:15 +0000 UTC" firstStartedPulling="2026-02-18 00:49:16.618733151 +0000 UTC m=+1429.996084093" lastFinishedPulling="2026-02-18 00:49:27.440071753 +0000 UTC m=+1440.817422695" observedRunningTime="2026-02-18 00:49:29.168520215 +0000 UTC m=+1442.545871157" watchObservedRunningTime="2026-02-18 00:49:29.283746048 +0000 UTC m=+1442.661096990" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.335937 4847 scope.go:117] "RemoveContainer" containerID="0c7f17ecf6ee8c806ed10be9a55ccd029260d8a7c3ad803972003c3758d9d6bb" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.354938 4847 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.367695 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.368056 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-scripts\") pod \"ceilometer-0\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") " pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.368125 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp2c4\" (UniqueName: \"kubernetes.io/projected/21c48173-1f36-48e4-be55-8a949632f022-kube-api-access-tp2c4\") pod \"ceilometer-0\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") " pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.368151 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21c48173-1f36-48e4-be55-8a949632f022-log-httpd\") pod \"ceilometer-0\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") " pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.368178 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") " pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.368401 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21c48173-1f36-48e4-be55-8a949632f022-run-httpd\") pod \"ceilometer-0\" (UID: 
\"21c48173-1f36-48e4-be55-8a949632f022\") " pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.368489 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") " pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.368535 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-config-data\") pod \"ceilometer-0\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") " pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.392580 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.394817 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.398703 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.398910 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.399026 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.402132 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.402758 4847 scope.go:117] "RemoveContainer" containerID="f19c9fb397bc77a646bef92be811569a6ac42633a72a017f03ab0854c9a4e2d1" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.438271 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40db5dc9-34a9-467b-9617-56ee9fc2d7e0" path="/var/lib/kubelet/pods/40db5dc9-34a9-467b-9617-56ee9fc2d7e0/volumes" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.440072 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ec71ab6-3f40-4239-b6b7-db48ce3aaca1" path="/var/lib/kubelet/pods/8ec71ab6-3f40-4239-b6b7-db48ce3aaca1/volumes" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.440898 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b65fe68c-6cd7-4a94-8d02-1c84419628d5" path="/var/lib/kubelet/pods/b65fe68c-6cd7-4a94-8d02-1c84419628d5/volumes" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.452426 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de202952-4ed4-4cc5-8eb4-1d167600a639" path="/var/lib/kubelet/pods/de202952-4ed4-4cc5-8eb4-1d167600a639/volumes" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.480228 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21c48173-1f36-48e4-be55-8a949632f022-log-httpd\") pod \"ceilometer-0\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") " pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.480280 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f1d368a-d1df-4e38-b82a-7cd8911050cc-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.480319 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") " pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.480359 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f1d368a-d1df-4e38-b82a-7cd8911050cc-logs\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.480428 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f1d368a-d1df-4e38-b82a-7cd8911050cc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.480443 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6f1d368a-d1df-4e38-b82a-7cd8911050cc-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.480469 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21c48173-1f36-48e4-be55-8a949632f022-run-httpd\") pod \"ceilometer-0\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") " pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.480494 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f1d368a-d1df-4e38-b82a-7cd8911050cc-config-data\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.480528 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") " pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.480586 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-config-data\") pod \"ceilometer-0\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") " pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.480648 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj97n\" (UniqueName: \"kubernetes.io/projected/6f1d368a-d1df-4e38-b82a-7cd8911050cc-kube-api-access-nj97n\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 
00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.480706 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f1d368a-d1df-4e38-b82a-7cd8911050cc-config-data-custom\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.480741 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1d368a-d1df-4e38-b82a-7cd8911050cc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.480775 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-scripts\") pod \"ceilometer-0\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") " pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.480811 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f1d368a-d1df-4e38-b82a-7cd8911050cc-scripts\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.480861 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp2c4\" (UniqueName: \"kubernetes.io/projected/21c48173-1f36-48e4-be55-8a949632f022-kube-api-access-tp2c4\") pod \"ceilometer-0\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") " pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.485810 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/21c48173-1f36-48e4-be55-8a949632f022-log-httpd\") pod \"ceilometer-0\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") " pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.489666 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21c48173-1f36-48e4-be55-8a949632f022-run-httpd\") pod \"ceilometer-0\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") " pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.494318 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.494482 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="0375fa1c-b349-44b5-8ba6-1d1afe1715ce" containerName="kube-state-metrics" containerID="cri-o://fb5591ed215956a552a616c15a648d98d59faa59c1ad578b2f4c6e631afa20ea" gracePeriod=30 Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.497266 4847 scope.go:117] "RemoveContainer" containerID="c2505a71abb7818cc5fd08322fc690f6d6a1d58559dc3464a68aa9d724835339" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.516911 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") " pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.522101 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-scripts\") pod \"ceilometer-0\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") " pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.526290 4847 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tp2c4\" (UniqueName: \"kubernetes.io/projected/21c48173-1f36-48e4-be55-8a949632f022-kube-api-access-tp2c4\") pod \"ceilometer-0\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") " pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.527697 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-config-data\") pod \"ceilometer-0\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") " pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.565268 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") " pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.568279 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.568469 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="81f5a395-7a57-4aac-9c38-35207716eb18" containerName="mysqld-exporter" containerID="cri-o://a424ae1212c82519dc92eda5fe818e5dd7409135ce09d1fa203fd98a9bde5015" gracePeriod=30 Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.584060 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1d368a-d1df-4e38-b82a-7cd8911050cc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.584149 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6f1d368a-d1df-4e38-b82a-7cd8911050cc-scripts\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.584200 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f1d368a-d1df-4e38-b82a-7cd8911050cc-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.584268 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f1d368a-d1df-4e38-b82a-7cd8911050cc-logs\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.584340 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f1d368a-d1df-4e38-b82a-7cd8911050cc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.584353 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f1d368a-d1df-4e38-b82a-7cd8911050cc-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.584383 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f1d368a-d1df-4e38-b82a-7cd8911050cc-config-data\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.584452 
4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj97n\" (UniqueName: \"kubernetes.io/projected/6f1d368a-d1df-4e38-b82a-7cd8911050cc-kube-api-access-nj97n\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.584494 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f1d368a-d1df-4e38-b82a-7cd8911050cc-config-data-custom\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.591300 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f1d368a-d1df-4e38-b82a-7cd8911050cc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.592205 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f1d368a-d1df-4e38-b82a-7cd8911050cc-logs\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.644829 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f1d368a-d1df-4e38-b82a-7cd8911050cc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.653173 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f1d368a-d1df-4e38-b82a-7cd8911050cc-scripts\") pod \"cinder-api-0\" (UID: 
\"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.659571 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f1d368a-d1df-4e38-b82a-7cd8911050cc-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.665934 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.697685 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f1d368a-d1df-4e38-b82a-7cd8911050cc-public-tls-certs\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.702092 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f1d368a-d1df-4e38-b82a-7cd8911050cc-config-data-custom\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.787564 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f1d368a-d1df-4e38-b82a-7cd8911050cc-config-data\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.811737 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj97n\" (UniqueName: \"kubernetes.io/projected/6f1d368a-d1df-4e38-b82a-7cd8911050cc-kube-api-access-nj97n\") pod \"cinder-api-0\" (UID: \"6f1d368a-d1df-4e38-b82a-7cd8911050cc\") " 
pod="openstack/cinder-api-0" Feb 18 00:49:29 crc kubenswrapper[4847]: I0218 00:49:29.907816 4847 scope.go:117] "RemoveContainer" containerID="003c966fbf199b63c19450634eb83278a298cea276426ff78c3165aa9057e396" Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.026385 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.182238 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-75d56c557b-p6pn6" event={"ID":"1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21","Type":"ContainerStarted","Data":"e0018bef6d8243ff73a654b7895a29ca7d150daeeae86325de93aa42cd2af9d5"} Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.222905 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-67b9f7bd8b-phnps" event={"ID":"67a5eed6-fda8-4fca-bd98-6bcb2270d646","Type":"ContainerStarted","Data":"f2fd0399238bbb5246206a319d7933332f8c13a87e73c9541a460cf0e4502ad2"} Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.223449 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-67b9f7bd8b-phnps" Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.234730 4847 generic.go:334] "Generic (PLEG): container finished" podID="81f5a395-7a57-4aac-9c38-35207716eb18" containerID="a424ae1212c82519dc92eda5fe818e5dd7409135ce09d1fa203fd98a9bde5015" exitCode=2 Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.234796 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"81f5a395-7a57-4aac-9c38-35207716eb18","Type":"ContainerDied","Data":"a424ae1212c82519dc92eda5fe818e5dd7409135ce09d1fa203fd98a9bde5015"} Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.245925 4847 generic.go:334] "Generic (PLEG): container finished" podID="0375fa1c-b349-44b5-8ba6-1d1afe1715ce" containerID="fb5591ed215956a552a616c15a648d98d59faa59c1ad578b2f4c6e631afa20ea" exitCode=2 Feb 18 
00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.245994 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0375fa1c-b349-44b5-8ba6-1d1afe1715ce","Type":"ContainerDied","Data":"fb5591ed215956a552a616c15a648d98d59faa59c1ad578b2f4c6e631afa20ea"} Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.265289 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-67b9f7bd8b-phnps" podStartSLOduration=7.26526884 podStartE2EDuration="7.26526884s" podCreationTimestamp="2026-02-18 00:49:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:49:30.241306621 +0000 UTC m=+1443.618657563" watchObservedRunningTime="2026-02-18 00:49:30.26526884 +0000 UTC m=+1443.642619782" Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.266438 4847 generic.go:334] "Generic (PLEG): container finished" podID="0198073d-b902-4914-a519-0c9ec3aed4eb" containerID="bf492a26b49bf95664eee5b06a8467c368423d298bb2c5931c11187282828860" exitCode=1 Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.266524 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-58b87f4965-bck5v" event={"ID":"0198073d-b902-4914-a519-0c9ec3aed4eb","Type":"ContainerDied","Data":"bf492a26b49bf95664eee5b06a8467c368423d298bb2c5931c11187282828860"} Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.272083 4847 scope.go:117] "RemoveContainer" containerID="bf492a26b49bf95664eee5b06a8467c368423d298bb2c5931c11187282828860" Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.286834 4847 generic.go:334] "Generic (PLEG): container finished" podID="7250520c-bcaf-4564-9155-8ecada7c6880" containerID="5880016d5ff0b9d563c1e2e80c6087e3ada7a2bfaeee8e08fcef0d94f3600ccd" exitCode=0 Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.286908 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-api-59794cdfcf-5hdcv" event={"ID":"7250520c-bcaf-4564-9155-8ecada7c6880","Type":"ContainerDied","Data":"5880016d5ff0b9d563c1e2e80c6087e3ada7a2bfaeee8e08fcef0d94f3600ccd"} Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.301127 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7df4cf8969-f69sk" event={"ID":"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6","Type":"ContainerStarted","Data":"5f58f1847bd2dd5f16ad0d2175162e361143604b6a44a658fd0d65fb456436d9"} Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.312845 4847 generic.go:334] "Generic (PLEG): container finished" podID="16f1c8da-07de-457e-a7f4-a16db587196b" containerID="f2de1251e50ab11e24d78c937ef4ddaa207cbf2710a76bc4ac6ef167d01c33e0" exitCode=0 Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.312944 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-79cfb99699-ctzx2" event={"ID":"16f1c8da-07de-457e-a7f4-a16db587196b","Type":"ContainerDied","Data":"f2de1251e50ab11e24d78c937ef4ddaa207cbf2710a76bc4ac6ef167d01c33e0"} Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.334932 4847 generic.go:334] "Generic (PLEG): container finished" podID="04eb603b-ceea-4448-98ee-bc1db325756e" containerID="dd6b3e9122790e32b74ea3513a2806ca1077f9d0aa1958fcd25db8068be98606" exitCode=1 Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.335260 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5d8485fcfd-qf9k4" event={"ID":"04eb603b-ceea-4448-98ee-bc1db325756e","Type":"ContainerDied","Data":"dd6b3e9122790e32b74ea3513a2806ca1077f9d0aa1958fcd25db8068be98606"} Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.335857 4847 scope.go:117] "RemoveContainer" containerID="dd6b3e9122790e32b74ea3513a2806ca1077f9d0aa1958fcd25db8068be98606" Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.607147 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.758182 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-869bw\" (UniqueName: \"kubernetes.io/projected/0375fa1c-b349-44b5-8ba6-1d1afe1715ce-kube-api-access-869bw\") pod \"0375fa1c-b349-44b5-8ba6-1d1afe1715ce\" (UID: \"0375fa1c-b349-44b5-8ba6-1d1afe1715ce\") " Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.770642 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-79cfb99699-ctzx2" Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.780741 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0375fa1c-b349-44b5-8ba6-1d1afe1715ce-kube-api-access-869bw" (OuterVolumeSpecName: "kube-api-access-869bw") pod "0375fa1c-b349-44b5-8ba6-1d1afe1715ce" (UID: "0375fa1c-b349-44b5-8ba6-1d1afe1715ce"). InnerVolumeSpecName "kube-api-access-869bw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.807629 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-59794cdfcf-5hdcv" Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.832653 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.860573 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-869bw\" (UniqueName: \"kubernetes.io/projected/0375fa1c-b349-44b5-8ba6-1d1afe1715ce-kube-api-access-869bw\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.964375 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tsb6\" (UniqueName: \"kubernetes.io/projected/7250520c-bcaf-4564-9155-8ecada7c6880-kube-api-access-5tsb6\") pod \"7250520c-bcaf-4564-9155-8ecada7c6880\" (UID: \"7250520c-bcaf-4564-9155-8ecada7c6880\") " Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.964439 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7250520c-bcaf-4564-9155-8ecada7c6880-config-data\") pod \"7250520c-bcaf-4564-9155-8ecada7c6880\" (UID: \"7250520c-bcaf-4564-9155-8ecada7c6880\") " Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.964502 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtwxv\" (UniqueName: \"kubernetes.io/projected/16f1c8da-07de-457e-a7f4-a16db587196b-kube-api-access-qtwxv\") pod \"16f1c8da-07de-457e-a7f4-a16db587196b\" (UID: \"16f1c8da-07de-457e-a7f4-a16db587196b\") " Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.964564 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7250520c-bcaf-4564-9155-8ecada7c6880-combined-ca-bundle\") pod \"7250520c-bcaf-4564-9155-8ecada7c6880\" (UID: \"7250520c-bcaf-4564-9155-8ecada7c6880\") " Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.964587 
4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16f1c8da-07de-457e-a7f4-a16db587196b-config-data\") pod \"16f1c8da-07de-457e-a7f4-a16db587196b\" (UID: \"16f1c8da-07de-457e-a7f4-a16db587196b\") " Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.964804 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7250520c-bcaf-4564-9155-8ecada7c6880-config-data-custom\") pod \"7250520c-bcaf-4564-9155-8ecada7c6880\" (UID: \"7250520c-bcaf-4564-9155-8ecada7c6880\") " Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.964858 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16f1c8da-07de-457e-a7f4-a16db587196b-config-data-custom\") pod \"16f1c8da-07de-457e-a7f4-a16db587196b\" (UID: \"16f1c8da-07de-457e-a7f4-a16db587196b\") " Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.964928 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f1c8da-07de-457e-a7f4-a16db587196b-combined-ca-bundle\") pod \"16f1c8da-07de-457e-a7f4-a16db587196b\" (UID: \"16f1c8da-07de-457e-a7f4-a16db587196b\") " Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.975426 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f1c8da-07de-457e-a7f4-a16db587196b-kube-api-access-qtwxv" (OuterVolumeSpecName: "kube-api-access-qtwxv") pod "16f1c8da-07de-457e-a7f4-a16db587196b" (UID: "16f1c8da-07de-457e-a7f4-a16db587196b"). InnerVolumeSpecName "kube-api-access-qtwxv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.976465 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7250520c-bcaf-4564-9155-8ecada7c6880-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7250520c-bcaf-4564-9155-8ecada7c6880" (UID: "7250520c-bcaf-4564-9155-8ecada7c6880"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.976886 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7250520c-bcaf-4564-9155-8ecada7c6880-kube-api-access-5tsb6" (OuterVolumeSpecName: "kube-api-access-5tsb6") pod "7250520c-bcaf-4564-9155-8ecada7c6880" (UID: "7250520c-bcaf-4564-9155-8ecada7c6880"). InnerVolumeSpecName "kube-api-access-5tsb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:49:30 crc kubenswrapper[4847]: I0218 00:49:30.976918 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f1c8da-07de-457e-a7f4-a16db587196b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "16f1c8da-07de-457e-a7f4-a16db587196b" (UID: "16f1c8da-07de-457e-a7f4-a16db587196b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.039685 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7250520c-bcaf-4564-9155-8ecada7c6880-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7250520c-bcaf-4564-9155-8ecada7c6880" (UID: "7250520c-bcaf-4564-9155-8ecada7c6880"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.053202 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.061623 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7250520c-bcaf-4564-9155-8ecada7c6880-config-data" (OuterVolumeSpecName: "config-data") pod "7250520c-bcaf-4564-9155-8ecada7c6880" (UID: "7250520c-bcaf-4564-9155-8ecada7c6880"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.066845 4847 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7250520c-bcaf-4564-9155-8ecada7c6880-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.066866 4847 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16f1c8da-07de-457e-a7f4-a16db587196b-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.066875 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tsb6\" (UniqueName: \"kubernetes.io/projected/7250520c-bcaf-4564-9155-8ecada7c6880-kube-api-access-5tsb6\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.066886 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7250520c-bcaf-4564-9155-8ecada7c6880-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.066894 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtwxv\" (UniqueName: \"kubernetes.io/projected/16f1c8da-07de-457e-a7f4-a16db587196b-kube-api-access-qtwxv\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.066902 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/7250520c-bcaf-4564-9155-8ecada7c6880-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.074715 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.078801 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f1c8da-07de-457e-a7f4-a16db587196b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16f1c8da-07de-457e-a7f4-a16db587196b" (UID: "16f1c8da-07de-457e-a7f4-a16db587196b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.082821 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f1c8da-07de-457e-a7f4-a16db587196b-config-data" (OuterVolumeSpecName: "config-data") pod "16f1c8da-07de-457e-a7f4-a16db587196b" (UID: "16f1c8da-07de-457e-a7f4-a16db587196b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.169202 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81f5a395-7a57-4aac-9c38-35207716eb18-combined-ca-bundle\") pod \"81f5a395-7a57-4aac-9c38-35207716eb18\" (UID: \"81f5a395-7a57-4aac-9c38-35207716eb18\") " Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.169358 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81f5a395-7a57-4aac-9c38-35207716eb18-config-data\") pod \"81f5a395-7a57-4aac-9c38-35207716eb18\" (UID: \"81f5a395-7a57-4aac-9c38-35207716eb18\") " Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.169494 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbp9l\" (UniqueName: \"kubernetes.io/projected/81f5a395-7a57-4aac-9c38-35207716eb18-kube-api-access-gbp9l\") pod \"81f5a395-7a57-4aac-9c38-35207716eb18\" (UID: \"81f5a395-7a57-4aac-9c38-35207716eb18\") " Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.169972 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f1c8da-07de-457e-a7f4-a16db587196b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.169983 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16f1c8da-07de-457e-a7f4-a16db587196b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.180422 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81f5a395-7a57-4aac-9c38-35207716eb18-kube-api-access-gbp9l" (OuterVolumeSpecName: "kube-api-access-gbp9l") pod "81f5a395-7a57-4aac-9c38-35207716eb18" (UID: 
"81f5a395-7a57-4aac-9c38-35207716eb18"). InnerVolumeSpecName "kube-api-access-gbp9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.216796 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81f5a395-7a57-4aac-9c38-35207716eb18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81f5a395-7a57-4aac-9c38-35207716eb18" (UID: "81f5a395-7a57-4aac-9c38-35207716eb18"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.271694 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbp9l\" (UniqueName: \"kubernetes.io/projected/81f5a395-7a57-4aac-9c38-35207716eb18-kube-api-access-gbp9l\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.271722 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81f5a395-7a57-4aac-9c38-35207716eb18-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.271918 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81f5a395-7a57-4aac-9c38-35207716eb18-config-data" (OuterVolumeSpecName: "config-data") pod "81f5a395-7a57-4aac-9c38-35207716eb18" (UID: "81f5a395-7a57-4aac-9c38-35207716eb18"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.354525 4847 generic.go:334] "Generic (PLEG): container finished" podID="0198073d-b902-4914-a519-0c9ec3aed4eb" containerID="2ee2c371d2aa3b3fe92920a6403a8eadb7c067f47a9af41015c17502df3fd689" exitCode=1 Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.354625 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-58b87f4965-bck5v" event={"ID":"0198073d-b902-4914-a519-0c9ec3aed4eb","Type":"ContainerDied","Data":"2ee2c371d2aa3b3fe92920a6403a8eadb7c067f47a9af41015c17502df3fd689"} Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.354679 4847 scope.go:117] "RemoveContainer" containerID="bf492a26b49bf95664eee5b06a8467c368423d298bb2c5931c11187282828860" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.355586 4847 scope.go:117] "RemoveContainer" containerID="2ee2c371d2aa3b3fe92920a6403a8eadb7c067f47a9af41015c17502df3fd689" Feb 18 00:49:31 crc kubenswrapper[4847]: E0218 00:49:31.356003 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-58b87f4965-bck5v_openstack(0198073d-b902-4914-a519-0c9ec3aed4eb)\"" pod="openstack/heat-cfnapi-58b87f4965-bck5v" podUID="0198073d-b902-4914-a519-0c9ec3aed4eb" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.372547 4847 generic.go:334] "Generic (PLEG): container finished" podID="04eb603b-ceea-4448-98ee-bc1db325756e" containerID="304bef84d5aeef3e6a105acfcc836c3f3a91864d01747848e19050805409cefa" exitCode=1 Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.372662 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5d8485fcfd-qf9k4" event={"ID":"04eb603b-ceea-4448-98ee-bc1db325756e","Type":"ContainerDied","Data":"304bef84d5aeef3e6a105acfcc836c3f3a91864d01747848e19050805409cefa"} Feb 18 00:49:31 crc 
kubenswrapper[4847]: I0218 00:49:31.373430 4847 scope.go:117] "RemoveContainer" containerID="304bef84d5aeef3e6a105acfcc836c3f3a91864d01747848e19050805409cefa" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.373533 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81f5a395-7a57-4aac-9c38-35207716eb18-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:31 crc kubenswrapper[4847]: E0218 00:49:31.373726 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5d8485fcfd-qf9k4_openstack(04eb603b-ceea-4448-98ee-bc1db325756e)\"" pod="openstack/heat-api-5d8485fcfd-qf9k4" podUID="04eb603b-ceea-4448-98ee-bc1db325756e" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.386808 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0375fa1c-b349-44b5-8ba6-1d1afe1715ce","Type":"ContainerDied","Data":"11e51f3e9867395e9fc48fdef2d8622a8b5a2417ef35c3fa0f6a0bbd35553a19"} Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.387012 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.412430 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-59794cdfcf-5hdcv" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.424941 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-59794cdfcf-5hdcv" event={"ID":"7250520c-bcaf-4564-9155-8ecada7c6880","Type":"ContainerDied","Data":"46c5f4faa67bb4bbb22bf81156bc1eaca3b40221230142cbfa6aaf2ee605428a"} Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.425452 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6f1d368a-d1df-4e38-b82a-7cd8911050cc","Type":"ContainerStarted","Data":"379736fd626bca65bc827d4b62d98c14f48238a18175a919fb55d197bfd7e9c2"} Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.438498 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7df4cf8969-f69sk" event={"ID":"30cfe0d1-2602-42ae-b1b3-3f4e562c13c6","Type":"ContainerStarted","Data":"a555944645f559025d9f6d700b65f2a0aa004aa5aba752804f4b5861eaf967fe"} Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.438894 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.438947 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.441837 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" event={"ID":"724e605e-6796-4384-8832-ab9bcec6a585","Type":"ContainerStarted","Data":"43ed40ef491f02f803725ca581ef3e9abc3d124779c98ee0c59cb435c048a0e1"} Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.442074 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.448274 4847 scope.go:117] "RemoveContainer" 
containerID="dd6b3e9122790e32b74ea3513a2806ca1077f9d0aa1958fcd25db8068be98606" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.448502 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-79cfb99699-ctzx2" event={"ID":"16f1c8da-07de-457e-a7f4-a16db587196b","Type":"ContainerDied","Data":"47927807051acc03b76065ba0ea8e030a1e643a8b77b3e91b62a5be3d82281f5"} Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.448557 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-79cfb99699-ctzx2" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.455037 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"81f5a395-7a57-4aac-9c38-35207716eb18","Type":"ContainerDied","Data":"65e22442076f115d1b32dc44b1d82890665225931824a302cc28372ac880a000"} Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.455371 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.479983 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21c48173-1f36-48e4-be55-8a949632f022","Type":"ContainerStarted","Data":"b0e236975f42ef777e673a6184de644642c742436f941ab41e418490ec392356"} Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.487668 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7df4cf8969-f69sk" podStartSLOduration=13.487637798 podStartE2EDuration="13.487637798s" podCreationTimestamp="2026-02-18 00:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:49:31.46295649 +0000 UTC m=+1444.840307432" watchObservedRunningTime="2026-02-18 00:49:31.487637798 +0000 UTC m=+1444.864988740" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.504882 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" podStartSLOduration=6.504856379 podStartE2EDuration="6.504856379s" podCreationTimestamp="2026-02-18 00:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:49:31.485762191 +0000 UTC m=+1444.863113133" watchObservedRunningTime="2026-02-18 00:49:31.504856379 +0000 UTC m=+1444.882207321" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.631097 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.641260 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.645967 4847 scope.go:117] "RemoveContainer" containerID="fb5591ed215956a552a616c15a648d98d59faa59c1ad578b2f4c6e631afa20ea" Feb 18 
00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.649490 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 00:49:31 crc kubenswrapper[4847]: E0218 00:49:31.649965 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0375fa1c-b349-44b5-8ba6-1d1afe1715ce" containerName="kube-state-metrics" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.649984 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="0375fa1c-b349-44b5-8ba6-1d1afe1715ce" containerName="kube-state-metrics" Feb 18 00:49:31 crc kubenswrapper[4847]: E0218 00:49:31.650020 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7250520c-bcaf-4564-9155-8ecada7c6880" containerName="heat-api" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.650027 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="7250520c-bcaf-4564-9155-8ecada7c6880" containerName="heat-api" Feb 18 00:49:31 crc kubenswrapper[4847]: E0218 00:49:31.650064 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f1c8da-07de-457e-a7f4-a16db587196b" containerName="heat-cfnapi" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.650072 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f1c8da-07de-457e-a7f4-a16db587196b" containerName="heat-cfnapi" Feb 18 00:49:31 crc kubenswrapper[4847]: E0218 00:49:31.650084 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f5a395-7a57-4aac-9c38-35207716eb18" containerName="mysqld-exporter" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.650089 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f5a395-7a57-4aac-9c38-35207716eb18" containerName="mysqld-exporter" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.650270 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="16f1c8da-07de-457e-a7f4-a16db587196b" containerName="heat-cfnapi" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.650283 4847 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0375fa1c-b349-44b5-8ba6-1d1afe1715ce" containerName="kube-state-metrics" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.650292 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="7250520c-bcaf-4564-9155-8ecada7c6880" containerName="heat-api" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.650304 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f5a395-7a57-4aac-9c38-35207716eb18" containerName="mysqld-exporter" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.651151 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.658748 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.658758 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.668573 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.674071 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-59794cdfcf-5hdcv"] Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.684848 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-59794cdfcf-5hdcv"] Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.704741 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-79cfb99699-ctzx2"] Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.718764 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-79cfb99699-ctzx2"] Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.730303 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/mysqld-exporter-0"] Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.739112 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.739792 4847 scope.go:117] "RemoveContainer" containerID="5880016d5ff0b9d563c1e2e80c6087e3ada7a2bfaeee8e08fcef0d94f3600ccd" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.749812 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.751506 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.754249 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.755200 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.787104 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.794485 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmkk2\" (UniqueName: \"kubernetes.io/projected/f660a69e-33ac-40d0-93f8-68f496ea44f3-kube-api-access-fmkk2\") pod \"kube-state-metrics-0\" (UID: \"f660a69e-33ac-40d0-93f8-68f496ea44f3\") " pod="openstack/kube-state-metrics-0" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.794617 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f660a69e-33ac-40d0-93f8-68f496ea44f3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f660a69e-33ac-40d0-93f8-68f496ea44f3\") " pod="openstack/kube-state-metrics-0" Feb 18 
00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.794653 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f660a69e-33ac-40d0-93f8-68f496ea44f3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f660a69e-33ac-40d0-93f8-68f496ea44f3\") " pod="openstack/kube-state-metrics-0" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.794674 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f660a69e-33ac-40d0-93f8-68f496ea44f3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f660a69e-33ac-40d0-93f8-68f496ea44f3\") " pod="openstack/kube-state-metrics-0" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.806885 4847 scope.go:117] "RemoveContainer" containerID="f2de1251e50ab11e24d78c937ef4ddaa207cbf2710a76bc4ac6ef167d01c33e0" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.860829 4847 scope.go:117] "RemoveContainer" containerID="a424ae1212c82519dc92eda5fe818e5dd7409135ce09d1fa203fd98a9bde5015" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.896157 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551ec97c-df77-4223-abff-f7d7eb766736-config-data\") pod \"mysqld-exporter-0\" (UID: \"551ec97c-df77-4223-abff-f7d7eb766736\") " pod="openstack/mysqld-exporter-0" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.896209 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551ec97c-df77-4223-abff-f7d7eb766736-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"551ec97c-df77-4223-abff-f7d7eb766736\") " pod="openstack/mysqld-exporter-0" Feb 18 00:49:31 crc kubenswrapper[4847]: 
I0218 00:49:31.896233 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmkk2\" (UniqueName: \"kubernetes.io/projected/f660a69e-33ac-40d0-93f8-68f496ea44f3-kube-api-access-fmkk2\") pod \"kube-state-metrics-0\" (UID: \"f660a69e-33ac-40d0-93f8-68f496ea44f3\") " pod="openstack/kube-state-metrics-0" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.896293 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/551ec97c-df77-4223-abff-f7d7eb766736-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"551ec97c-df77-4223-abff-f7d7eb766736\") " pod="openstack/mysqld-exporter-0" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.896325 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f660a69e-33ac-40d0-93f8-68f496ea44f3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f660a69e-33ac-40d0-93f8-68f496ea44f3\") " pod="openstack/kube-state-metrics-0" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.896345 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs4ff\" (UniqueName: \"kubernetes.io/projected/551ec97c-df77-4223-abff-f7d7eb766736-kube-api-access-fs4ff\") pod \"mysqld-exporter-0\" (UID: \"551ec97c-df77-4223-abff-f7d7eb766736\") " pod="openstack/mysqld-exporter-0" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.896373 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f660a69e-33ac-40d0-93f8-68f496ea44f3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f660a69e-33ac-40d0-93f8-68f496ea44f3\") " pod="openstack/kube-state-metrics-0" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.897190 4847 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f660a69e-33ac-40d0-93f8-68f496ea44f3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f660a69e-33ac-40d0-93f8-68f496ea44f3\") " pod="openstack/kube-state-metrics-0" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.901293 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/f660a69e-33ac-40d0-93f8-68f496ea44f3-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"f660a69e-33ac-40d0-93f8-68f496ea44f3\") " pod="openstack/kube-state-metrics-0" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.904096 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/f660a69e-33ac-40d0-93f8-68f496ea44f3-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"f660a69e-33ac-40d0-93f8-68f496ea44f3\") " pod="openstack/kube-state-metrics-0" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.910025 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f660a69e-33ac-40d0-93f8-68f496ea44f3-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"f660a69e-33ac-40d0-93f8-68f496ea44f3\") " pod="openstack/kube-state-metrics-0" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.911256 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmkk2\" (UniqueName: \"kubernetes.io/projected/f660a69e-33ac-40d0-93f8-68f496ea44f3-kube-api-access-fmkk2\") pod \"kube-state-metrics-0\" (UID: \"f660a69e-33ac-40d0-93f8-68f496ea44f3\") " pod="openstack/kube-state-metrics-0" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.998817 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-fs4ff\" (UniqueName: \"kubernetes.io/projected/551ec97c-df77-4223-abff-f7d7eb766736-kube-api-access-fs4ff\") pod \"mysqld-exporter-0\" (UID: \"551ec97c-df77-4223-abff-f7d7eb766736\") " pod="openstack/mysqld-exporter-0" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.999216 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551ec97c-df77-4223-abff-f7d7eb766736-config-data\") pod \"mysqld-exporter-0\" (UID: \"551ec97c-df77-4223-abff-f7d7eb766736\") " pod="openstack/mysqld-exporter-0" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.999249 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551ec97c-df77-4223-abff-f7d7eb766736-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"551ec97c-df77-4223-abff-f7d7eb766736\") " pod="openstack/mysqld-exporter-0" Feb 18 00:49:31 crc kubenswrapper[4847]: I0218 00:49:31.999305 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/551ec97c-df77-4223-abff-f7d7eb766736-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"551ec97c-df77-4223-abff-f7d7eb766736\") " pod="openstack/mysqld-exporter-0" Feb 18 00:49:32 crc kubenswrapper[4847]: I0218 00:49:32.004097 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/551ec97c-df77-4223-abff-f7d7eb766736-config-data\") pod \"mysqld-exporter-0\" (UID: \"551ec97c-df77-4223-abff-f7d7eb766736\") " pod="openstack/mysqld-exporter-0" Feb 18 00:49:32 crc kubenswrapper[4847]: I0218 00:49:32.004223 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/551ec97c-df77-4223-abff-f7d7eb766736-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: 
\"551ec97c-df77-4223-abff-f7d7eb766736\") " pod="openstack/mysqld-exporter-0" Feb 18 00:49:32 crc kubenswrapper[4847]: I0218 00:49:32.007056 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/551ec97c-df77-4223-abff-f7d7eb766736-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"551ec97c-df77-4223-abff-f7d7eb766736\") " pod="openstack/mysqld-exporter-0" Feb 18 00:49:32 crc kubenswrapper[4847]: I0218 00:49:32.013427 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs4ff\" (UniqueName: \"kubernetes.io/projected/551ec97c-df77-4223-abff-f7d7eb766736-kube-api-access-fs4ff\") pod \"mysqld-exporter-0\" (UID: \"551ec97c-df77-4223-abff-f7d7eb766736\") " pod="openstack/mysqld-exporter-0" Feb 18 00:49:32 crc kubenswrapper[4847]: I0218 00:49:32.047281 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 18 00:49:32 crc kubenswrapper[4847]: I0218 00:49:32.091171 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 18 00:49:32 crc kubenswrapper[4847]: I0218 00:49:32.517520 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21c48173-1f36-48e4-be55-8a949632f022","Type":"ContainerStarted","Data":"53ae0d0f446d72257aa6c21dc2631e58ba9a5ff7a325daaaa3d7384a4ee380a6"} Feb 18 00:49:32 crc kubenswrapper[4847]: I0218 00:49:32.530815 4847 scope.go:117] "RemoveContainer" containerID="304bef84d5aeef3e6a105acfcc836c3f3a91864d01747848e19050805409cefa" Feb 18 00:49:32 crc kubenswrapper[4847]: E0218 00:49:32.531095 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5d8485fcfd-qf9k4_openstack(04eb603b-ceea-4448-98ee-bc1db325756e)\"" pod="openstack/heat-api-5d8485fcfd-qf9k4" podUID="04eb603b-ceea-4448-98ee-bc1db325756e" Feb 18 00:49:32 crc kubenswrapper[4847]: I0218 00:49:32.558077 4847 generic.go:334] "Generic (PLEG): container finished" podID="ddb80342-6498-4e44-aa6d-72bba457dbbe" containerID="643a355cc4288a509bc6c4144ab495e8828a61cf1fe162f22092013c465f4281" exitCode=0 Feb 18 00:49:32 crc kubenswrapper[4847]: I0218 00:49:32.558137 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5857d66f7d-gqg2m" event={"ID":"ddb80342-6498-4e44-aa6d-72bba457dbbe","Type":"ContainerDied","Data":"643a355cc4288a509bc6c4144ab495e8828a61cf1fe162f22092013c465f4281"} Feb 18 00:49:32 crc kubenswrapper[4847]: I0218 00:49:32.561267 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6f1d368a-d1df-4e38-b82a-7cd8911050cc","Type":"ContainerStarted","Data":"3bffe5b1e5f0ef71c99ed1f8bb5b9617494df6029d961fbc6257561083593602"} Feb 18 00:49:32 crc kubenswrapper[4847]: I0218 00:49:32.589164 4847 scope.go:117] "RemoveContainer" containerID="2ee2c371d2aa3b3fe92920a6403a8eadb7c067f47a9af41015c17502df3fd689" Feb 18 
00:49:32 crc kubenswrapper[4847]: E0218 00:49:32.589435 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-58b87f4965-bck5v_openstack(0198073d-b902-4914-a519-0c9ec3aed4eb)\"" pod="openstack/heat-cfnapi-58b87f4965-bck5v" podUID="0198073d-b902-4914-a519-0c9ec3aed4eb" Feb 18 00:49:32 crc kubenswrapper[4847]: I0218 00:49:32.666915 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 18 00:49:32 crc kubenswrapper[4847]: I0218 00:49:32.814408 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="40db5dc9-34a9-467b-9617-56ee9fc2d7e0" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.190:8776/healthcheck\": dial tcp 10.217.0.190:8776: i/o timeout (Client.Timeout exceeded while awaiting headers)" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.045331 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5857d66f7d-gqg2m" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.087377 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-httpd-config\") pod \"ddb80342-6498-4e44-aa6d-72bba457dbbe\" (UID: \"ddb80342-6498-4e44-aa6d-72bba457dbbe\") " Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.087786 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-combined-ca-bundle\") pod \"ddb80342-6498-4e44-aa6d-72bba457dbbe\" (UID: \"ddb80342-6498-4e44-aa6d-72bba457dbbe\") " Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.087829 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-ovndb-tls-certs\") pod \"ddb80342-6498-4e44-aa6d-72bba457dbbe\" (UID: \"ddb80342-6498-4e44-aa6d-72bba457dbbe\") " Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.087880 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-config\") pod \"ddb80342-6498-4e44-aa6d-72bba457dbbe\" (UID: \"ddb80342-6498-4e44-aa6d-72bba457dbbe\") " Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.088020 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdksx\" (UniqueName: \"kubernetes.io/projected/ddb80342-6498-4e44-aa6d-72bba457dbbe-kube-api-access-zdksx\") pod \"ddb80342-6498-4e44-aa6d-72bba457dbbe\" (UID: \"ddb80342-6498-4e44-aa6d-72bba457dbbe\") " Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.095985 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ddb80342-6498-4e44-aa6d-72bba457dbbe" (UID: "ddb80342-6498-4e44-aa6d-72bba457dbbe"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.110476 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb80342-6498-4e44-aa6d-72bba457dbbe-kube-api-access-zdksx" (OuterVolumeSpecName: "kube-api-access-zdksx") pod "ddb80342-6498-4e44-aa6d-72bba457dbbe" (UID: "ddb80342-6498-4e44-aa6d-72bba457dbbe"). InnerVolumeSpecName "kube-api-access-zdksx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.191243 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdksx\" (UniqueName: \"kubernetes.io/projected/ddb80342-6498-4e44-aa6d-72bba457dbbe-kube-api-access-zdksx\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.191273 4847 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.198044 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.216944 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ddb80342-6498-4e44-aa6d-72bba457dbbe" (UID: "ddb80342-6498-4e44-aa6d-72bba457dbbe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:33 crc kubenswrapper[4847]: W0218 00:49:33.233803 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod551ec97c_df77_4223_abff_f7d7eb766736.slice/crio-b67001f78078d55632af79e2be50e09e7956584583bd0c3357e78f87dfd13d93 WatchSource:0}: Error finding container b67001f78078d55632af79e2be50e09e7956584583bd0c3357e78f87dfd13d93: Status 404 returned error can't find the container with id b67001f78078d55632af79e2be50e09e7956584583bd0c3357e78f87dfd13d93 Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.265837 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-config" (OuterVolumeSpecName: "config") pod "ddb80342-6498-4e44-aa6d-72bba457dbbe" (UID: "ddb80342-6498-4e44-aa6d-72bba457dbbe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.265895 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ddb80342-6498-4e44-aa6d-72bba457dbbe" (UID: "ddb80342-6498-4e44-aa6d-72bba457dbbe"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.293375 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.293679 4847 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.293772 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ddb80342-6498-4e44-aa6d-72bba457dbbe-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.450482 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0375fa1c-b349-44b5-8ba6-1d1afe1715ce" path="/var/lib/kubelet/pods/0375fa1c-b349-44b5-8ba6-1d1afe1715ce/volumes" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.451762 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16f1c8da-07de-457e-a7f4-a16db587196b" path="/var/lib/kubelet/pods/16f1c8da-07de-457e-a7f4-a16db587196b/volumes" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.460565 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7250520c-bcaf-4564-9155-8ecada7c6880" path="/var/lib/kubelet/pods/7250520c-bcaf-4564-9155-8ecada7c6880/volumes" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.463320 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81f5a395-7a57-4aac-9c38-35207716eb18" path="/var/lib/kubelet/pods/81f5a395-7a57-4aac-9c38-35207716eb18/volumes" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.563673 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-pdsmh"] 
Feb 18 00:49:33 crc kubenswrapper[4847]: E0218 00:49:33.564489 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb80342-6498-4e44-aa6d-72bba457dbbe" containerName="neutron-api" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.564575 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb80342-6498-4e44-aa6d-72bba457dbbe" containerName="neutron-api" Feb 18 00:49:33 crc kubenswrapper[4847]: E0218 00:49:33.564669 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb80342-6498-4e44-aa6d-72bba457dbbe" containerName="neutron-httpd" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.564726 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb80342-6498-4e44-aa6d-72bba457dbbe" containerName="neutron-httpd" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.565025 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddb80342-6498-4e44-aa6d-72bba457dbbe" containerName="neutron-api" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.565111 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddb80342-6498-4e44-aa6d-72bba457dbbe" containerName="neutron-httpd" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.565974 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pdsmh" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.599321 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pdsmh"] Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.622885 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-xx9bl"] Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.626398 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xx9bl" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.638576 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21c48173-1f36-48e4-be55-8a949632f022","Type":"ContainerStarted","Data":"f77d21c6986dfc0a4e4db4d9a5fdc6e8fb9566716c94bd019802d0699eae6ff4"} Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.651097 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f660a69e-33ac-40d0-93f8-68f496ea44f3","Type":"ContainerStarted","Data":"1af90f5575a4eadbae3c2bcf169c815d6bdf014aacb9ea9743bb6f2defc49b5e"} Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.651157 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.651170 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f660a69e-33ac-40d0-93f8-68f496ea44f3","Type":"ContainerStarted","Data":"05f2b1dc0e5aaa5bff58c688e3f194496a86b59cf860dd789c6d502297be398e"} Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.659585 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/defca7ab-d2c8-4c4a-910f-06bebeba7b81-operator-scripts\") pod \"nova-api-db-create-pdsmh\" (UID: \"defca7ab-d2c8-4c4a-910f-06bebeba7b81\") " pod="openstack/nova-api-db-create-pdsmh" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.659714 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5snf\" (UniqueName: \"kubernetes.io/projected/defca7ab-d2c8-4c4a-910f-06bebeba7b81-kube-api-access-q5snf\") pod \"nova-api-db-create-pdsmh\" (UID: \"defca7ab-d2c8-4c4a-910f-06bebeba7b81\") " pod="openstack/nova-api-db-create-pdsmh" Feb 18 00:49:33 crc 
kubenswrapper[4847]: I0218 00:49:33.659997 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6f1d368a-d1df-4e38-b82a-7cd8911050cc","Type":"ContainerStarted","Data":"37eaab9381ef5c21f9e3387411718094193ee36bd668504a4cba8035525d78bc"} Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.660693 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.683580 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5857d66f7d-gqg2m" event={"ID":"ddb80342-6498-4e44-aa6d-72bba457dbbe","Type":"ContainerDied","Data":"9fb00cbbed76cc8954c89c2ecc5d5760b24f2f4dc25a935ba12eba44fb52342d"} Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.683650 4847 scope.go:117] "RemoveContainer" containerID="4f49aba9c883fc0dffc7b09f488580c619196525f4257b14613b0e8caa3ab209" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.683756 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5857d66f7d-gqg2m" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.695092 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"551ec97c-df77-4223-abff-f7d7eb766736","Type":"ContainerStarted","Data":"b67001f78078d55632af79e2be50e09e7956584583bd0c3357e78f87dfd13d93"} Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.725835 4847 scope.go:117] "RemoveContainer" containerID="643a355cc4288a509bc6c4144ab495e8828a61cf1fe162f22092013c465f4281" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.732835 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xx9bl"] Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.756326 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-fqndf"] Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.757593 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fqndf" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.765677 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-92b2-account-create-update-7v9rd"] Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.767194 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-92b2-account-create-update-7v9rd" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.768675 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5snf\" (UniqueName: \"kubernetes.io/projected/defca7ab-d2c8-4c4a-910f-06bebeba7b81-kube-api-access-q5snf\") pod \"nova-api-db-create-pdsmh\" (UID: \"defca7ab-d2c8-4c4a-910f-06bebeba7b81\") " pod="openstack/nova-api-db-create-pdsmh" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.768859 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/defca7ab-d2c8-4c4a-910f-06bebeba7b81-operator-scripts\") pod \"nova-api-db-create-pdsmh\" (UID: \"defca7ab-d2c8-4c4a-910f-06bebeba7b81\") " pod="openstack/nova-api-db-create-pdsmh" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.768903 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2217ced3-3917-43e6-8c1b-23c7184f4591-operator-scripts\") pod \"nova-cell0-db-create-xx9bl\" (UID: \"2217ced3-3917-43e6-8c1b-23c7184f4591\") " pod="openstack/nova-cell0-db-create-xx9bl" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.768947 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc7nc\" (UniqueName: \"kubernetes.io/projected/2217ced3-3917-43e6-8c1b-23c7184f4591-kube-api-access-rc7nc\") pod \"nova-cell0-db-create-xx9bl\" (UID: \"2217ced3-3917-43e6-8c1b-23c7184f4591\") " pod="openstack/nova-cell0-db-create-xx9bl" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.774894 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.779344 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-db-create-fqndf"] Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.787273 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.2371720489999998 podStartE2EDuration="2.787250724s" podCreationTimestamp="2026-02-18 00:49:31 +0000 UTC" firstStartedPulling="2026-02-18 00:49:32.70292466 +0000 UTC m=+1446.080275592" lastFinishedPulling="2026-02-18 00:49:33.253003325 +0000 UTC m=+1446.630354267" observedRunningTime="2026-02-18 00:49:33.684865132 +0000 UTC m=+1447.062216074" watchObservedRunningTime="2026-02-18 00:49:33.787250724 +0000 UTC m=+1447.164601666" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.787320 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/defca7ab-d2c8-4c4a-910f-06bebeba7b81-operator-scripts\") pod \"nova-api-db-create-pdsmh\" (UID: \"defca7ab-d2c8-4c4a-910f-06bebeba7b81\") " pod="openstack/nova-api-db-create-pdsmh" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.803281 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-92b2-account-create-update-7v9rd"] Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.803469 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.803450249 podStartE2EDuration="4.803450249s" podCreationTimestamp="2026-02-18 00:49:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:49:33.707027886 +0000 UTC m=+1447.084378838" watchObservedRunningTime="2026-02-18 00:49:33.803450249 +0000 UTC m=+1447.180801191" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.805825 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5snf\" (UniqueName: 
\"kubernetes.io/projected/defca7ab-d2c8-4c4a-910f-06bebeba7b81-kube-api-access-q5snf\") pod \"nova-api-db-create-pdsmh\" (UID: \"defca7ab-d2c8-4c4a-910f-06bebeba7b81\") " pod="openstack/nova-api-db-create-pdsmh" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.872973 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ec0cbce-5157-43e7-9aba-7973b14170ce-operator-scripts\") pod \"nova-cell1-db-create-fqndf\" (UID: \"3ec0cbce-5157-43e7-9aba-7973b14170ce\") " pod="openstack/nova-cell1-db-create-fqndf" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.873085 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9808ca0d-1d9f-4692-ae85-975a6ca3822f-operator-scripts\") pod \"nova-api-92b2-account-create-update-7v9rd\" (UID: \"9808ca0d-1d9f-4692-ae85-975a6ca3822f\") " pod="openstack/nova-api-92b2-account-create-update-7v9rd" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.873186 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dq47\" (UniqueName: \"kubernetes.io/projected/3ec0cbce-5157-43e7-9aba-7973b14170ce-kube-api-access-9dq47\") pod \"nova-cell1-db-create-fqndf\" (UID: \"3ec0cbce-5157-43e7-9aba-7973b14170ce\") " pod="openstack/nova-cell1-db-create-fqndf" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.873227 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2217ced3-3917-43e6-8c1b-23c7184f4591-operator-scripts\") pod \"nova-cell0-db-create-xx9bl\" (UID: \"2217ced3-3917-43e6-8c1b-23c7184f4591\") " pod="openstack/nova-cell0-db-create-xx9bl" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.873287 4847 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-rc7nc\" (UniqueName: \"kubernetes.io/projected/2217ced3-3917-43e6-8c1b-23c7184f4591-kube-api-access-rc7nc\") pod \"nova-cell0-db-create-xx9bl\" (UID: \"2217ced3-3917-43e6-8c1b-23c7184f4591\") " pod="openstack/nova-cell0-db-create-xx9bl" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.873365 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dx5d\" (UniqueName: \"kubernetes.io/projected/9808ca0d-1d9f-4692-ae85-975a6ca3822f-kube-api-access-6dx5d\") pod \"nova-api-92b2-account-create-update-7v9rd\" (UID: \"9808ca0d-1d9f-4692-ae85-975a6ca3822f\") " pod="openstack/nova-api-92b2-account-create-update-7v9rd" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.874322 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2217ced3-3917-43e6-8c1b-23c7184f4591-operator-scripts\") pod \"nova-cell0-db-create-xx9bl\" (UID: \"2217ced3-3917-43e6-8c1b-23c7184f4591\") " pod="openstack/nova-cell0-db-create-xx9bl" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.903183 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-pdsmh" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.919674 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5857d66f7d-gqg2m"] Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.930349 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5857d66f7d-gqg2m"] Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.936138 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc7nc\" (UniqueName: \"kubernetes.io/projected/2217ced3-3917-43e6-8c1b-23c7184f4591-kube-api-access-rc7nc\") pod \"nova-cell0-db-create-xx9bl\" (UID: \"2217ced3-3917-43e6-8c1b-23c7184f4591\") " pod="openstack/nova-cell0-db-create-xx9bl" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.958100 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xx9bl" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.960107 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5ce0-account-create-update-5qbbm"] Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.961830 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-5ce0-account-create-update-5qbbm" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.965223 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.973207 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5ce0-account-create-update-5qbbm"] Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.975010 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dx5d\" (UniqueName: \"kubernetes.io/projected/9808ca0d-1d9f-4692-ae85-975a6ca3822f-kube-api-access-6dx5d\") pod \"nova-api-92b2-account-create-update-7v9rd\" (UID: \"9808ca0d-1d9f-4692-ae85-975a6ca3822f\") " pod="openstack/nova-api-92b2-account-create-update-7v9rd" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.975095 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ec0cbce-5157-43e7-9aba-7973b14170ce-operator-scripts\") pod \"nova-cell1-db-create-fqndf\" (UID: \"3ec0cbce-5157-43e7-9aba-7973b14170ce\") " pod="openstack/nova-cell1-db-create-fqndf" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.975149 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9808ca0d-1d9f-4692-ae85-975a6ca3822f-operator-scripts\") pod \"nova-api-92b2-account-create-update-7v9rd\" (UID: \"9808ca0d-1d9f-4692-ae85-975a6ca3822f\") " pod="openstack/nova-api-92b2-account-create-update-7v9rd" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.975207 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dq47\" (UniqueName: \"kubernetes.io/projected/3ec0cbce-5157-43e7-9aba-7973b14170ce-kube-api-access-9dq47\") pod \"nova-cell1-db-create-fqndf\" (UID: 
\"3ec0cbce-5157-43e7-9aba-7973b14170ce\") " pod="openstack/nova-cell1-db-create-fqndf" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.976192 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ec0cbce-5157-43e7-9aba-7973b14170ce-operator-scripts\") pod \"nova-cell1-db-create-fqndf\" (UID: \"3ec0cbce-5157-43e7-9aba-7973b14170ce\") " pod="openstack/nova-cell1-db-create-fqndf" Feb 18 00:49:33 crc kubenswrapper[4847]: I0218 00:49:33.977199 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9808ca0d-1d9f-4692-ae85-975a6ca3822f-operator-scripts\") pod \"nova-api-92b2-account-create-update-7v9rd\" (UID: \"9808ca0d-1d9f-4692-ae85-975a6ca3822f\") " pod="openstack/nova-api-92b2-account-create-update-7v9rd" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.004609 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dq47\" (UniqueName: \"kubernetes.io/projected/3ec0cbce-5157-43e7-9aba-7973b14170ce-kube-api-access-9dq47\") pod \"nova-cell1-db-create-fqndf\" (UID: \"3ec0cbce-5157-43e7-9aba-7973b14170ce\") " pod="openstack/nova-cell1-db-create-fqndf" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.008573 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dx5d\" (UniqueName: \"kubernetes.io/projected/9808ca0d-1d9f-4692-ae85-975a6ca3822f-kube-api-access-6dx5d\") pod \"nova-api-92b2-account-create-update-7v9rd\" (UID: \"9808ca0d-1d9f-4692-ae85-975a6ca3822f\") " pod="openstack/nova-api-92b2-account-create-update-7v9rd" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.011877 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-c4f1-account-create-update-qvp96"] Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.025734 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-c4f1-account-create-update-qvp96"] Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.025969 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c4f1-account-create-update-qvp96" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.035416 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.078830 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7wfs\" (UniqueName: \"kubernetes.io/projected/3caed2d8-3c83-45dd-946b-b4765bb99f58-kube-api-access-t7wfs\") pod \"nova-cell0-5ce0-account-create-update-5qbbm\" (UID: \"3caed2d8-3c83-45dd-946b-b4765bb99f58\") " pod="openstack/nova-cell0-5ce0-account-create-update-5qbbm" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.078912 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3caed2d8-3c83-45dd-946b-b4765bb99f58-operator-scripts\") pod \"nova-cell0-5ce0-account-create-update-5qbbm\" (UID: \"3caed2d8-3c83-45dd-946b-b4765bb99f58\") " pod="openstack/nova-cell0-5ce0-account-create-update-5qbbm" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.136730 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fqndf" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.139153 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-92b2-account-create-update-7v9rd" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.181368 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn8tr\" (UniqueName: \"kubernetes.io/projected/0b5e6ecd-a8a5-4722-8195-aa62753ce56f-kube-api-access-sn8tr\") pod \"nova-cell1-c4f1-account-create-update-qvp96\" (UID: \"0b5e6ecd-a8a5-4722-8195-aa62753ce56f\") " pod="openstack/nova-cell1-c4f1-account-create-update-qvp96" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.181457 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b5e6ecd-a8a5-4722-8195-aa62753ce56f-operator-scripts\") pod \"nova-cell1-c4f1-account-create-update-qvp96\" (UID: \"0b5e6ecd-a8a5-4722-8195-aa62753ce56f\") " pod="openstack/nova-cell1-c4f1-account-create-update-qvp96" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.181486 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7wfs\" (UniqueName: \"kubernetes.io/projected/3caed2d8-3c83-45dd-946b-b4765bb99f58-kube-api-access-t7wfs\") pod \"nova-cell0-5ce0-account-create-update-5qbbm\" (UID: \"3caed2d8-3c83-45dd-946b-b4765bb99f58\") " pod="openstack/nova-cell0-5ce0-account-create-update-5qbbm" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.181563 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3caed2d8-3c83-45dd-946b-b4765bb99f58-operator-scripts\") pod \"nova-cell0-5ce0-account-create-update-5qbbm\" (UID: \"3caed2d8-3c83-45dd-946b-b4765bb99f58\") " pod="openstack/nova-cell0-5ce0-account-create-update-5qbbm" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.182705 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/3caed2d8-3c83-45dd-946b-b4765bb99f58-operator-scripts\") pod \"nova-cell0-5ce0-account-create-update-5qbbm\" (UID: \"3caed2d8-3c83-45dd-946b-b4765bb99f58\") " pod="openstack/nova-cell0-5ce0-account-create-update-5qbbm" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.231029 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7df4cf8969-f69sk" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.231459 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7wfs\" (UniqueName: \"kubernetes.io/projected/3caed2d8-3c83-45dd-946b-b4765bb99f58-kube-api-access-t7wfs\") pod \"nova-cell0-5ce0-account-create-update-5qbbm\" (UID: \"3caed2d8-3c83-45dd-946b-b4765bb99f58\") " pod="openstack/nova-cell0-5ce0-account-create-update-5qbbm" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.286691 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn8tr\" (UniqueName: \"kubernetes.io/projected/0b5e6ecd-a8a5-4722-8195-aa62753ce56f-kube-api-access-sn8tr\") pod \"nova-cell1-c4f1-account-create-update-qvp96\" (UID: \"0b5e6ecd-a8a5-4722-8195-aa62753ce56f\") " pod="openstack/nova-cell1-c4f1-account-create-update-qvp96" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.286804 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b5e6ecd-a8a5-4722-8195-aa62753ce56f-operator-scripts\") pod \"nova-cell1-c4f1-account-create-update-qvp96\" (UID: \"0b5e6ecd-a8a5-4722-8195-aa62753ce56f\") " pod="openstack/nova-cell1-c4f1-account-create-update-qvp96" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.288126 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b5e6ecd-a8a5-4722-8195-aa62753ce56f-operator-scripts\") pod 
\"nova-cell1-c4f1-account-create-update-qvp96\" (UID: \"0b5e6ecd-a8a5-4722-8195-aa62753ce56f\") " pod="openstack/nova-cell1-c4f1-account-create-update-qvp96" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.305577 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5ce0-account-create-update-5qbbm" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.320164 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn8tr\" (UniqueName: \"kubernetes.io/projected/0b5e6ecd-a8a5-4722-8195-aa62753ce56f-kube-api-access-sn8tr\") pod \"nova-cell1-c4f1-account-create-update-qvp96\" (UID: \"0b5e6ecd-a8a5-4722-8195-aa62753ce56f\") " pod="openstack/nova-cell1-c4f1-account-create-update-qvp96" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.398322 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-58b87f4965-bck5v" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.398657 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-58b87f4965-bck5v" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.399003 4847 scope.go:117] "RemoveContainer" containerID="2ee2c371d2aa3b3fe92920a6403a8eadb7c067f47a9af41015c17502df3fd689" Feb 18 00:49:34 crc kubenswrapper[4847]: E0218 00:49:34.399361 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-58b87f4965-bck5v_openstack(0198073d-b902-4914-a519-0c9ec3aed4eb)\"" pod="openstack/heat-cfnapi-58b87f4965-bck5v" podUID="0198073d-b902-4914-a519-0c9ec3aed4eb" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.452883 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-c4f1-account-create-update-qvp96" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.572065 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pdsmh"] Feb 18 00:49:34 crc kubenswrapper[4847]: W0218 00:49:34.586181 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddefca7ab_d2c8_4c4a_910f_06bebeba7b81.slice/crio-270426f3577a7c9b0b48031fa53825f2f6e59c5bf9049800c932d7f1793f60a3 WatchSource:0}: Error finding container 270426f3577a7c9b0b48031fa53825f2f6e59c5bf9049800c932d7f1793f60a3: Status 404 returned error can't find the container with id 270426f3577a7c9b0b48031fa53825f2f6e59c5bf9049800c932d7f1793f60a3 Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.717794 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"551ec97c-df77-4223-abff-f7d7eb766736","Type":"ContainerStarted","Data":"d55ad8792fb1452fcf5ec76de88eaaf608b90f2ba3cba817159fdb9e9e97a26c"} Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.724822 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21c48173-1f36-48e4-be55-8a949632f022","Type":"ContainerStarted","Data":"fe85d1d694a250ab2f5684f0445700c12255dab0ea10c0e7c3a3b1d88442ccde"} Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.726662 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pdsmh" event={"ID":"defca7ab-d2c8-4c4a-910f-06bebeba7b81","Type":"ContainerStarted","Data":"270426f3577a7c9b0b48031fa53825f2f6e59c5bf9049800c932d7f1793f60a3"} Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.740002 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.155902319 podStartE2EDuration="3.739982845s" podCreationTimestamp="2026-02-18 00:49:31 +0000 UTC" 
firstStartedPulling="2026-02-18 00:49:33.246858671 +0000 UTC m=+1446.624209613" lastFinishedPulling="2026-02-18 00:49:33.830939207 +0000 UTC m=+1447.208290139" observedRunningTime="2026-02-18 00:49:34.739183025 +0000 UTC m=+1448.116533987" watchObservedRunningTime="2026-02-18 00:49:34.739982845 +0000 UTC m=+1448.117333787" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.750005 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-5d8485fcfd-qf9k4" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.750256 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5d8485fcfd-qf9k4" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.750962 4847 scope.go:117] "RemoveContainer" containerID="2ee2c371d2aa3b3fe92920a6403a8eadb7c067f47a9af41015c17502df3fd689" Feb 18 00:49:34 crc kubenswrapper[4847]: E0218 00:49:34.752031 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-58b87f4965-bck5v_openstack(0198073d-b902-4914-a519-0c9ec3aed4eb)\"" pod="openstack/heat-cfnapi-58b87f4965-bck5v" podUID="0198073d-b902-4914-a519-0c9ec3aed4eb" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.753588 4847 scope.go:117] "RemoveContainer" containerID="304bef84d5aeef3e6a105acfcc836c3f3a91864d01747848e19050805409cefa" Feb 18 00:49:34 crc kubenswrapper[4847]: E0218 00:49:34.753894 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5d8485fcfd-qf9k4_openstack(04eb603b-ceea-4448-98ee-bc1db325756e)\"" pod="openstack/heat-api-5d8485fcfd-qf9k4" podUID="04eb603b-ceea-4448-98ee-bc1db325756e" Feb 18 00:49:34 crc kubenswrapper[4847]: I0218 00:49:34.769120 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-db-create-xx9bl"] Feb 18 00:49:34 crc kubenswrapper[4847]: W0218 00:49:34.821631 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2217ced3_3917_43e6_8c1b_23c7184f4591.slice/crio-0aa4f4fde9688825fc41f61a8ebf30e48249b9c09ca667cd81de248d3def659a WatchSource:0}: Error finding container 0aa4f4fde9688825fc41f61a8ebf30e48249b9c09ca667cd81de248d3def659a: Status 404 returned error can't find the container with id 0aa4f4fde9688825fc41f61a8ebf30e48249b9c09ca667cd81de248d3def659a Feb 18 00:49:35 crc kubenswrapper[4847]: I0218 00:49:35.173998 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-92b2-account-create-update-7v9rd"] Feb 18 00:49:35 crc kubenswrapper[4847]: I0218 00:49:35.183907 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fqndf"] Feb 18 00:49:35 crc kubenswrapper[4847]: I0218 00:49:35.262909 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:49:35 crc kubenswrapper[4847]: I0218 00:49:35.502066 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddb80342-6498-4e44-aa6d-72bba457dbbe" path="/var/lib/kubelet/pods/ddb80342-6498-4e44-aa6d-72bba457dbbe/volumes" Feb 18 00:49:35 crc kubenswrapper[4847]: I0218 00:49:35.502762 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5ce0-account-create-update-5qbbm"] Feb 18 00:49:35 crc kubenswrapper[4847]: I0218 00:49:35.502783 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c4f1-account-create-update-qvp96"] Feb 18 00:49:35 crc kubenswrapper[4847]: I0218 00:49:35.750169 4847 generic.go:334] "Generic (PLEG): container finished" podID="2217ced3-3917-43e6-8c1b-23c7184f4591" containerID="2bf67bf8504506f4301446e987e9da6642153f44c026a1c70b6e5f265abb2a36" exitCode=0 Feb 18 00:49:35 crc kubenswrapper[4847]: I0218 
00:49:35.750233 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xx9bl" event={"ID":"2217ced3-3917-43e6-8c1b-23c7184f4591","Type":"ContainerDied","Data":"2bf67bf8504506f4301446e987e9da6642153f44c026a1c70b6e5f265abb2a36"} Feb 18 00:49:35 crc kubenswrapper[4847]: I0218 00:49:35.750263 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xx9bl" event={"ID":"2217ced3-3917-43e6-8c1b-23c7184f4591","Type":"ContainerStarted","Data":"0aa4f4fde9688825fc41f61a8ebf30e48249b9c09ca667cd81de248d3def659a"} Feb 18 00:49:35 crc kubenswrapper[4847]: I0218 00:49:35.752113 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-92b2-account-create-update-7v9rd" event={"ID":"9808ca0d-1d9f-4692-ae85-975a6ca3822f","Type":"ContainerStarted","Data":"9eb3be4d4b9ccaeaa0d26743ed2210d2abd92e5b8797c776b4e80823b17da279"} Feb 18 00:49:35 crc kubenswrapper[4847]: I0218 00:49:35.752141 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-92b2-account-create-update-7v9rd" event={"ID":"9808ca0d-1d9f-4692-ae85-975a6ca3822f","Type":"ContainerStarted","Data":"bc5765d9607562b54aae2a8b7257ec7f619bdfe1cd25845b017e60313ea2ffd3"} Feb 18 00:49:35 crc kubenswrapper[4847]: I0218 00:49:35.754423 4847 generic.go:334] "Generic (PLEG): container finished" podID="defca7ab-d2c8-4c4a-910f-06bebeba7b81" containerID="19d89bfa0587c78c3bf032b2abb5fbaf0deba53e413e48ed44e4cdcf0e342593" exitCode=0 Feb 18 00:49:35 crc kubenswrapper[4847]: I0218 00:49:35.754626 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pdsmh" event={"ID":"defca7ab-d2c8-4c4a-910f-06bebeba7b81","Type":"ContainerDied","Data":"19d89bfa0587c78c3bf032b2abb5fbaf0deba53e413e48ed44e4cdcf0e342593"} Feb 18 00:49:35 crc kubenswrapper[4847]: I0218 00:49:35.771271 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5ce0-account-create-update-5qbbm" 
event={"ID":"3caed2d8-3c83-45dd-946b-b4765bb99f58","Type":"ContainerStarted","Data":"de9166a98653ddf263d0a42269055647f4d2516437bbd2586eeee6a545a23ff5"} Feb 18 00:49:35 crc kubenswrapper[4847]: I0218 00:49:35.783427 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fqndf" event={"ID":"3ec0cbce-5157-43e7-9aba-7973b14170ce","Type":"ContainerStarted","Data":"a7b09064f64997187a8f355a0de5b41e789a08a3729f585b59f264e17ef2f8aa"} Feb 18 00:49:35 crc kubenswrapper[4847]: I0218 00:49:35.783476 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fqndf" event={"ID":"3ec0cbce-5157-43e7-9aba-7973b14170ce","Type":"ContainerStarted","Data":"11d932da4daa8042f67f01b2421449348d80b888ce92a22e64f838df58e8c35d"} Feb 18 00:49:35 crc kubenswrapper[4847]: I0218 00:49:35.800534 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c4f1-account-create-update-qvp96" event={"ID":"0b5e6ecd-a8a5-4722-8195-aa62753ce56f","Type":"ContainerStarted","Data":"a885117ecc80ea5a80a2f1997ee7bf986fe2c82d9a7df9c5fe93a84bf4b3f4f9"} Feb 18 00:49:35 crc kubenswrapper[4847]: I0218 00:49:35.835668 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-92b2-account-create-update-7v9rd" podStartSLOduration=2.835646232 podStartE2EDuration="2.835646232s" podCreationTimestamp="2026-02-18 00:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:49:35.809948609 +0000 UTC m=+1449.187299551" watchObservedRunningTime="2026-02-18 00:49:35.835646232 +0000 UTC m=+1449.212997184" Feb 18 00:49:35 crc kubenswrapper[4847]: I0218 00:49:35.853783 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-fqndf" podStartSLOduration=2.8537626659999997 podStartE2EDuration="2.853762666s" podCreationTimestamp="2026-02-18 00:49:33 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:49:35.830624357 +0000 UTC m=+1449.207975299" watchObservedRunningTime="2026-02-18 00:49:35.853762666 +0000 UTC m=+1449.231113598" Feb 18 00:49:35 crc kubenswrapper[4847]: I0218 00:49:35.874346 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:49:35 crc kubenswrapper[4847]: I0218 00:49:35.879066 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-c4f1-account-create-update-qvp96" podStartSLOduration=2.879044618 podStartE2EDuration="2.879044618s" podCreationTimestamp="2026-02-18 00:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:49:35.847037517 +0000 UTC m=+1449.224388459" watchObservedRunningTime="2026-02-18 00:49:35.879044618 +0000 UTC m=+1449.256395560" Feb 18 00:49:35 crc kubenswrapper[4847]: I0218 00:49:35.953199 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-cvlv7"] Feb 18 00:49:35 crc kubenswrapper[4847]: I0218 00:49:35.953712 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" podUID="e0a31394-e534-4372-9f15-344df4565d6a" containerName="dnsmasq-dns" containerID="cri-o://e5d6cf1391aa302afa7c2a918cc4ec9344b3dd0b96943e63210badd255367a97" gracePeriod=10 Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.043882 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5b8c9bd889-lvxrd" Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.638286 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.720134 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7jzv\" (UniqueName: \"kubernetes.io/projected/e0a31394-e534-4372-9f15-344df4565d6a-kube-api-access-r7jzv\") pod \"e0a31394-e534-4372-9f15-344df4565d6a\" (UID: \"e0a31394-e534-4372-9f15-344df4565d6a\") " Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.720315 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-ovsdbserver-sb\") pod \"e0a31394-e534-4372-9f15-344df4565d6a\" (UID: \"e0a31394-e534-4372-9f15-344df4565d6a\") " Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.720385 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-config\") pod \"e0a31394-e534-4372-9f15-344df4565d6a\" (UID: \"e0a31394-e534-4372-9f15-344df4565d6a\") " Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.720453 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-ovsdbserver-nb\") pod \"e0a31394-e534-4372-9f15-344df4565d6a\" (UID: \"e0a31394-e534-4372-9f15-344df4565d6a\") " Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.720524 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-dns-swift-storage-0\") pod \"e0a31394-e534-4372-9f15-344df4565d6a\" (UID: \"e0a31394-e534-4372-9f15-344df4565d6a\") " Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.720563 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-dns-svc\") pod \"e0a31394-e534-4372-9f15-344df4565d6a\" (UID: \"e0a31394-e534-4372-9f15-344df4565d6a\") " Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.745839 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0a31394-e534-4372-9f15-344df4565d6a-kube-api-access-r7jzv" (OuterVolumeSpecName: "kube-api-access-r7jzv") pod "e0a31394-e534-4372-9f15-344df4565d6a" (UID: "e0a31394-e534-4372-9f15-344df4565d6a"). InnerVolumeSpecName "kube-api-access-r7jzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.831459 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7jzv\" (UniqueName: \"kubernetes.io/projected/e0a31394-e534-4372-9f15-344df4565d6a-kube-api-access-r7jzv\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.839353 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e0a31394-e534-4372-9f15-344df4565d6a" (UID: "e0a31394-e534-4372-9f15-344df4565d6a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.851084 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e0a31394-e534-4372-9f15-344df4565d6a" (UID: "e0a31394-e534-4372-9f15-344df4565d6a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.855995 4847 generic.go:334] "Generic (PLEG): container finished" podID="0b5e6ecd-a8a5-4722-8195-aa62753ce56f" containerID="867c9a3b4ad951a551d08d4df8b1f470196105ae232ff95cfd38e6bd1305ccf0" exitCode=0 Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.856093 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c4f1-account-create-update-qvp96" event={"ID":"0b5e6ecd-a8a5-4722-8195-aa62753ce56f","Type":"ContainerDied","Data":"867c9a3b4ad951a551d08d4df8b1f470196105ae232ff95cfd38e6bd1305ccf0"} Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.868188 4847 generic.go:334] "Generic (PLEG): container finished" podID="e0a31394-e534-4372-9f15-344df4565d6a" containerID="e5d6cf1391aa302afa7c2a918cc4ec9344b3dd0b96943e63210badd255367a97" exitCode=0 Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.868554 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" event={"ID":"e0a31394-e534-4372-9f15-344df4565d6a","Type":"ContainerDied","Data":"e5d6cf1391aa302afa7c2a918cc4ec9344b3dd0b96943e63210badd255367a97"} Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.868669 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" event={"ID":"e0a31394-e534-4372-9f15-344df4565d6a","Type":"ContainerDied","Data":"bc619f389e8d313cb38817e3901f1865faf4fce364fe2b72e7c3737f1ac5156f"} Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.868768 4847 scope.go:117] "RemoveContainer" containerID="e5d6cf1391aa302afa7c2a918cc4ec9344b3dd0b96943e63210badd255367a97" Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.868980 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-cvlv7" Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.882910 4847 generic.go:334] "Generic (PLEG): container finished" podID="9808ca0d-1d9f-4692-ae85-975a6ca3822f" containerID="9eb3be4d4b9ccaeaa0d26743ed2210d2abd92e5b8797c776b4e80823b17da279" exitCode=0 Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.882995 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-92b2-account-create-update-7v9rd" event={"ID":"9808ca0d-1d9f-4692-ae85-975a6ca3822f","Type":"ContainerDied","Data":"9eb3be4d4b9ccaeaa0d26743ed2210d2abd92e5b8797c776b4e80823b17da279"} Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.884097 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e0a31394-e534-4372-9f15-344df4565d6a" (UID: "e0a31394-e534-4372-9f15-344df4565d6a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.891771 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-config" (OuterVolumeSpecName: "config") pod "e0a31394-e534-4372-9f15-344df4565d6a" (UID: "e0a31394-e534-4372-9f15-344df4565d6a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.903634 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21c48173-1f36-48e4-be55-8a949632f022","Type":"ContainerStarted","Data":"fafeade87f55cf9a053e0b88adab11f0816aa0ff32c3496f8d59a70212dfd71f"} Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.903816 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21c48173-1f36-48e4-be55-8a949632f022" containerName="ceilometer-central-agent" containerID="cri-o://53ae0d0f446d72257aa6c21dc2631e58ba9a5ff7a325daaaa3d7384a4ee380a6" gracePeriod=30 Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.903908 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21c48173-1f36-48e4-be55-8a949632f022" containerName="proxy-httpd" containerID="cri-o://fafeade87f55cf9a053e0b88adab11f0816aa0ff32c3496f8d59a70212dfd71f" gracePeriod=30 Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.903946 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21c48173-1f36-48e4-be55-8a949632f022" containerName="sg-core" containerID="cri-o://fe85d1d694a250ab2f5684f0445700c12255dab0ea10c0e7c3a3b1d88442ccde" gracePeriod=30 Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.903980 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="21c48173-1f36-48e4-be55-8a949632f022" containerName="ceilometer-notification-agent" containerID="cri-o://f77d21c6986dfc0a4e4db4d9a5fdc6e8fb9566716c94bd019802d0699eae6ff4" gracePeriod=30 Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.904137 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.919917 4847 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e0a31394-e534-4372-9f15-344df4565d6a" (UID: "e0a31394-e534-4372-9f15-344df4565d6a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.933119 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.933490 4847 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.933509 4847 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.933517 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.933544 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0a31394-e534-4372-9f15-344df4565d6a-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.937315 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.652019516 podStartE2EDuration="7.93729004s" podCreationTimestamp="2026-02-18 00:49:29 +0000 UTC" firstStartedPulling="2026-02-18 
00:49:30.817771916 +0000 UTC m=+1444.195122858" lastFinishedPulling="2026-02-18 00:49:35.10304244 +0000 UTC m=+1448.480393382" observedRunningTime="2026-02-18 00:49:36.926066169 +0000 UTC m=+1450.303417111" watchObservedRunningTime="2026-02-18 00:49:36.93729004 +0000 UTC m=+1450.314640982" Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.945809 4847 generic.go:334] "Generic (PLEG): container finished" podID="3caed2d8-3c83-45dd-946b-b4765bb99f58" containerID="966c5437769cfb517be9b685d1cac9e5d886c0e82aed26df78e085789f6f123c" exitCode=0 Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.945896 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5ce0-account-create-update-5qbbm" event={"ID":"3caed2d8-3c83-45dd-946b-b4765bb99f58","Type":"ContainerDied","Data":"966c5437769cfb517be9b685d1cac9e5d886c0e82aed26df78e085789f6f123c"} Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.950530 4847 generic.go:334] "Generic (PLEG): container finished" podID="3ec0cbce-5157-43e7-9aba-7973b14170ce" containerID="a7b09064f64997187a8f355a0de5b41e789a08a3729f585b59f264e17ef2f8aa" exitCode=0 Feb 18 00:49:36 crc kubenswrapper[4847]: I0218 00:49:36.950655 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fqndf" event={"ID":"3ec0cbce-5157-43e7-9aba-7973b14170ce","Type":"ContainerDied","Data":"a7b09064f64997187a8f355a0de5b41e789a08a3729f585b59f264e17ef2f8aa"} Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.003023 4847 scope.go:117] "RemoveContainer" containerID="4fc06426ddfd4f27a9e586355ed1dc32d38abb673d67ada5db1cf23441f0b665" Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.050365 4847 scope.go:117] "RemoveContainer" containerID="e5d6cf1391aa302afa7c2a918cc4ec9344b3dd0b96943e63210badd255367a97" Feb 18 00:49:37 crc kubenswrapper[4847]: E0218 00:49:37.051462 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e5d6cf1391aa302afa7c2a918cc4ec9344b3dd0b96943e63210badd255367a97\": container with ID starting with e5d6cf1391aa302afa7c2a918cc4ec9344b3dd0b96943e63210badd255367a97 not found: ID does not exist" containerID="e5d6cf1391aa302afa7c2a918cc4ec9344b3dd0b96943e63210badd255367a97" Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.051493 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5d6cf1391aa302afa7c2a918cc4ec9344b3dd0b96943e63210badd255367a97"} err="failed to get container status \"e5d6cf1391aa302afa7c2a918cc4ec9344b3dd0b96943e63210badd255367a97\": rpc error: code = NotFound desc = could not find container \"e5d6cf1391aa302afa7c2a918cc4ec9344b3dd0b96943e63210badd255367a97\": container with ID starting with e5d6cf1391aa302afa7c2a918cc4ec9344b3dd0b96943e63210badd255367a97 not found: ID does not exist" Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.051515 4847 scope.go:117] "RemoveContainer" containerID="4fc06426ddfd4f27a9e586355ed1dc32d38abb673d67ada5db1cf23441f0b665" Feb 18 00:49:37 crc kubenswrapper[4847]: E0218 00:49:37.052335 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fc06426ddfd4f27a9e586355ed1dc32d38abb673d67ada5db1cf23441f0b665\": container with ID starting with 4fc06426ddfd4f27a9e586355ed1dc32d38abb673d67ada5db1cf23441f0b665 not found: ID does not exist" containerID="4fc06426ddfd4f27a9e586355ed1dc32d38abb673d67ada5db1cf23441f0b665" Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.052386 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fc06426ddfd4f27a9e586355ed1dc32d38abb673d67ada5db1cf23441f0b665"} err="failed to get container status \"4fc06426ddfd4f27a9e586355ed1dc32d38abb673d67ada5db1cf23441f0b665\": rpc error: code = NotFound desc = could not find container \"4fc06426ddfd4f27a9e586355ed1dc32d38abb673d67ada5db1cf23441f0b665\": container with ID 
starting with 4fc06426ddfd4f27a9e586355ed1dc32d38abb673d67ada5db1cf23441f0b665 not found: ID does not exist" Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.287090 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-cvlv7"] Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.320009 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-cvlv7"] Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.402056 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xx9bl" Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.450736 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0a31394-e534-4372-9f15-344df4565d6a" path="/var/lib/kubelet/pods/e0a31394-e534-4372-9f15-344df4565d6a/volumes" Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.465954 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.549380 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2217ced3-3917-43e6-8c1b-23c7184f4591-operator-scripts\") pod \"2217ced3-3917-43e6-8c1b-23c7184f4591\" (UID: \"2217ced3-3917-43e6-8c1b-23c7184f4591\") " Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.549539 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc7nc\" (UniqueName: \"kubernetes.io/projected/2217ced3-3917-43e6-8c1b-23c7184f4591-kube-api-access-rc7nc\") pod \"2217ced3-3917-43e6-8c1b-23c7184f4591\" (UID: \"2217ced3-3917-43e6-8c1b-23c7184f4591\") " Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.551442 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2217ced3-3917-43e6-8c1b-23c7184f4591-operator-scripts" 
(OuterVolumeSpecName: "operator-scripts") pod "2217ced3-3917-43e6-8c1b-23c7184f4591" (UID: "2217ced3-3917-43e6-8c1b-23c7184f4591"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.562834 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2217ced3-3917-43e6-8c1b-23c7184f4591-kube-api-access-rc7nc" (OuterVolumeSpecName: "kube-api-access-rc7nc") pod "2217ced3-3917-43e6-8c1b-23c7184f4591" (UID: "2217ced3-3917-43e6-8c1b-23c7184f4591"). InnerVolumeSpecName "kube-api-access-rc7nc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.656371 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc7nc\" (UniqueName: \"kubernetes.io/projected/2217ced3-3917-43e6-8c1b-23c7184f4591-kube-api-access-rc7nc\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.656396 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2217ced3-3917-43e6-8c1b-23c7184f4591-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.940846 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-pdsmh" Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.963350 4847 generic.go:334] "Generic (PLEG): container finished" podID="21c48173-1f36-48e4-be55-8a949632f022" containerID="fafeade87f55cf9a053e0b88adab11f0816aa0ff32c3496f8d59a70212dfd71f" exitCode=0 Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.963381 4847 generic.go:334] "Generic (PLEG): container finished" podID="21c48173-1f36-48e4-be55-8a949632f022" containerID="fe85d1d694a250ab2f5684f0445700c12255dab0ea10c0e7c3a3b1d88442ccde" exitCode=2 Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.963389 4847 generic.go:334] "Generic (PLEG): container finished" podID="21c48173-1f36-48e4-be55-8a949632f022" containerID="f77d21c6986dfc0a4e4db4d9a5fdc6e8fb9566716c94bd019802d0699eae6ff4" exitCode=0 Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.963425 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21c48173-1f36-48e4-be55-8a949632f022","Type":"ContainerDied","Data":"fafeade87f55cf9a053e0b88adab11f0816aa0ff32c3496f8d59a70212dfd71f"} Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.963453 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21c48173-1f36-48e4-be55-8a949632f022","Type":"ContainerDied","Data":"fe85d1d694a250ab2f5684f0445700c12255dab0ea10c0e7c3a3b1d88442ccde"} Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.963463 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21c48173-1f36-48e4-be55-8a949632f022","Type":"ContainerDied","Data":"f77d21c6986dfc0a4e4db4d9a5fdc6e8fb9566716c94bd019802d0699eae6ff4"} Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.965026 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pdsmh" 
event={"ID":"defca7ab-d2c8-4c4a-910f-06bebeba7b81","Type":"ContainerDied","Data":"270426f3577a7c9b0b48031fa53825f2f6e59c5bf9049800c932d7f1793f60a3"} Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.965051 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="270426f3577a7c9b0b48031fa53825f2f6e59c5bf9049800c932d7f1793f60a3" Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.965097 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pdsmh" Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.967910 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xx9bl" event={"ID":"2217ced3-3917-43e6-8c1b-23c7184f4591","Type":"ContainerDied","Data":"0aa4f4fde9688825fc41f61a8ebf30e48249b9c09ca667cd81de248d3def659a"} Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.967954 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0aa4f4fde9688825fc41f61a8ebf30e48249b9c09ca667cd81de248d3def659a" Feb 18 00:49:37 crc kubenswrapper[4847]: I0218 00:49:37.968015 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xx9bl" Feb 18 00:49:38 crc kubenswrapper[4847]: I0218 00:49:38.097462 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5snf\" (UniqueName: \"kubernetes.io/projected/defca7ab-d2c8-4c4a-910f-06bebeba7b81-kube-api-access-q5snf\") pod \"defca7ab-d2c8-4c4a-910f-06bebeba7b81\" (UID: \"defca7ab-d2c8-4c4a-910f-06bebeba7b81\") " Feb 18 00:49:38 crc kubenswrapper[4847]: I0218 00:49:38.097817 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/defca7ab-d2c8-4c4a-910f-06bebeba7b81-operator-scripts\") pod \"defca7ab-d2c8-4c4a-910f-06bebeba7b81\" (UID: \"defca7ab-d2c8-4c4a-910f-06bebeba7b81\") " Feb 18 00:49:38 crc kubenswrapper[4847]: I0218 00:49:38.099058 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/defca7ab-d2c8-4c4a-910f-06bebeba7b81-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "defca7ab-d2c8-4c4a-910f-06bebeba7b81" (UID: "defca7ab-d2c8-4c4a-910f-06bebeba7b81"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:49:38 crc kubenswrapper[4847]: I0218 00:49:38.114096 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/defca7ab-d2c8-4c4a-910f-06bebeba7b81-kube-api-access-q5snf" (OuterVolumeSpecName: "kube-api-access-q5snf") pod "defca7ab-d2c8-4c4a-910f-06bebeba7b81" (UID: "defca7ab-d2c8-4c4a-910f-06bebeba7b81"). InnerVolumeSpecName "kube-api-access-q5snf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:49:38 crc kubenswrapper[4847]: I0218 00:49:38.180662 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5fd77b47d6-ms5hf" Feb 18 00:49:38 crc kubenswrapper[4847]: I0218 00:49:38.202039 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/defca7ab-d2c8-4c4a-910f-06bebeba7b81-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:38 crc kubenswrapper[4847]: I0218 00:49:38.202077 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5snf\" (UniqueName: \"kubernetes.io/projected/defca7ab-d2c8-4c4a-910f-06bebeba7b81-kube-api-access-q5snf\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:38 crc kubenswrapper[4847]: I0218 00:49:38.231082 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-75d56c557b-p6pn6" Feb 18 00:49:38 crc kubenswrapper[4847]: I0218 00:49:38.278099 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-58b87f4965-bck5v"] Feb 18 00:49:38 crc kubenswrapper[4847]: I0218 00:49:38.351424 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-f4b564c84-4zd7z" Feb 18 00:49:38 crc kubenswrapper[4847]: I0218 00:49:38.397928 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5d8485fcfd-qf9k4"] Feb 18 00:49:38 crc kubenswrapper[4847]: I0218 00:49:38.537275 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-556dbf5b5b-fmjz4"] Feb 18 00:49:38 crc kubenswrapper[4847]: I0218 00:49:38.537546 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-556dbf5b5b-fmjz4" podUID="7d6a2670-a6f9-4fe7-8356-16cee45d0167" containerName="placement-log" containerID="cri-o://69456edca1a4d92be728d83efe1bc2e0767d48bce249cbe098d4830d884ffe42" gracePeriod=30 Feb 
18 00:49:38 crc kubenswrapper[4847]: I0218 00:49:38.538054 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-556dbf5b5b-fmjz4" podUID="7d6a2670-a6f9-4fe7-8356-16cee45d0167" containerName="placement-api" containerID="cri-o://e2dc111804a9faef6ff8700a4a8c34574288c869ce3a771223c6787eeb8a0276" gracePeriod=30
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.004953 4847 generic.go:334] "Generic (PLEG): container finished" podID="7d6a2670-a6f9-4fe7-8356-16cee45d0167" containerID="69456edca1a4d92be728d83efe1bc2e0767d48bce249cbe098d4830d884ffe42" exitCode=143
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.005006 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-556dbf5b5b-fmjz4" event={"ID":"7d6a2670-a6f9-4fe7-8356-16cee45d0167","Type":"ContainerDied","Data":"69456edca1a4d92be728d83efe1bc2e0767d48bce249cbe098d4830d884ffe42"}
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.275826 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7df4cf8969-f69sk"
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.553265 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-58b87f4965-bck5v"
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.561906 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5d8485fcfd-qf9k4"
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.573518 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c4f1-account-create-update-qvp96"
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.587895 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-92b2-account-create-update-7v9rd"
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.609927 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5ce0-account-create-update-5qbbm"
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.625877 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fqndf"
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.674394 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0198073d-b902-4914-a519-0c9ec3aed4eb-config-data-custom\") pod \"0198073d-b902-4914-a519-0c9ec3aed4eb\" (UID: \"0198073d-b902-4914-a519-0c9ec3aed4eb\") "
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.674433 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn8tr\" (UniqueName: \"kubernetes.io/projected/0b5e6ecd-a8a5-4722-8195-aa62753ce56f-kube-api-access-sn8tr\") pod \"0b5e6ecd-a8a5-4722-8195-aa62753ce56f\" (UID: \"0b5e6ecd-a8a5-4722-8195-aa62753ce56f\") "
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.674456 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04eb603b-ceea-4448-98ee-bc1db325756e-config-data-custom\") pod \"04eb603b-ceea-4448-98ee-bc1db325756e\" (UID: \"04eb603b-ceea-4448-98ee-bc1db325756e\") "
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.674541 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jbj7\" (UniqueName: \"kubernetes.io/projected/04eb603b-ceea-4448-98ee-bc1db325756e-kube-api-access-7jbj7\") pod \"04eb603b-ceea-4448-98ee-bc1db325756e\" (UID: \"04eb603b-ceea-4448-98ee-bc1db325756e\") "
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.674588 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0198073d-b902-4914-a519-0c9ec3aed4eb-config-data\") pod \"0198073d-b902-4914-a519-0c9ec3aed4eb\" (UID: \"0198073d-b902-4914-a519-0c9ec3aed4eb\") "
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.674668 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04eb603b-ceea-4448-98ee-bc1db325756e-combined-ca-bundle\") pod \"04eb603b-ceea-4448-98ee-bc1db325756e\" (UID: \"04eb603b-ceea-4448-98ee-bc1db325756e\") "
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.674692 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9808ca0d-1d9f-4692-ae85-975a6ca3822f-operator-scripts\") pod \"9808ca0d-1d9f-4692-ae85-975a6ca3822f\" (UID: \"9808ca0d-1d9f-4692-ae85-975a6ca3822f\") "
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.674731 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04eb603b-ceea-4448-98ee-bc1db325756e-config-data\") pod \"04eb603b-ceea-4448-98ee-bc1db325756e\" (UID: \"04eb603b-ceea-4448-98ee-bc1db325756e\") "
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.674756 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dx5d\" (UniqueName: \"kubernetes.io/projected/9808ca0d-1d9f-4692-ae85-975a6ca3822f-kube-api-access-6dx5d\") pod \"9808ca0d-1d9f-4692-ae85-975a6ca3822f\" (UID: \"9808ca0d-1d9f-4692-ae85-975a6ca3822f\") "
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.674799 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0198073d-b902-4914-a519-0c9ec3aed4eb-combined-ca-bundle\") pod \"0198073d-b902-4914-a519-0c9ec3aed4eb\" (UID: \"0198073d-b902-4914-a519-0c9ec3aed4eb\") "
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.674823 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b5e6ecd-a8a5-4722-8195-aa62753ce56f-operator-scripts\") pod \"0b5e6ecd-a8a5-4722-8195-aa62753ce56f\" (UID: \"0b5e6ecd-a8a5-4722-8195-aa62753ce56f\") "
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.674853 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz4jc\" (UniqueName: \"kubernetes.io/projected/0198073d-b902-4914-a519-0c9ec3aed4eb-kube-api-access-fz4jc\") pod \"0198073d-b902-4914-a519-0c9ec3aed4eb\" (UID: \"0198073d-b902-4914-a519-0c9ec3aed4eb\") "
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.676176 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9808ca0d-1d9f-4692-ae85-975a6ca3822f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9808ca0d-1d9f-4692-ae85-975a6ca3822f" (UID: "9808ca0d-1d9f-4692-ae85-975a6ca3822f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.681564 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9808ca0d-1d9f-4692-ae85-975a6ca3822f-kube-api-access-6dx5d" (OuterVolumeSpecName: "kube-api-access-6dx5d") pod "9808ca0d-1d9f-4692-ae85-975a6ca3822f" (UID: "9808ca0d-1d9f-4692-ae85-975a6ca3822f"). InnerVolumeSpecName "kube-api-access-6dx5d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.683690 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0198073d-b902-4914-a519-0c9ec3aed4eb-kube-api-access-fz4jc" (OuterVolumeSpecName: "kube-api-access-fz4jc") pod "0198073d-b902-4914-a519-0c9ec3aed4eb" (UID: "0198073d-b902-4914-a519-0c9ec3aed4eb"). InnerVolumeSpecName "kube-api-access-fz4jc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.685234 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b5e6ecd-a8a5-4722-8195-aa62753ce56f-kube-api-access-sn8tr" (OuterVolumeSpecName: "kube-api-access-sn8tr") pod "0b5e6ecd-a8a5-4722-8195-aa62753ce56f" (UID: "0b5e6ecd-a8a5-4722-8195-aa62753ce56f"). InnerVolumeSpecName "kube-api-access-sn8tr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.685691 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04eb603b-ceea-4448-98ee-bc1db325756e-kube-api-access-7jbj7" (OuterVolumeSpecName: "kube-api-access-7jbj7") pod "04eb603b-ceea-4448-98ee-bc1db325756e" (UID: "04eb603b-ceea-4448-98ee-bc1db325756e"). InnerVolumeSpecName "kube-api-access-7jbj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.690642 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b5e6ecd-a8a5-4722-8195-aa62753ce56f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b5e6ecd-a8a5-4722-8195-aa62753ce56f" (UID: "0b5e6ecd-a8a5-4722-8195-aa62753ce56f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.692893 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0198073d-b902-4914-a519-0c9ec3aed4eb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0198073d-b902-4914-a519-0c9ec3aed4eb" (UID: "0198073d-b902-4914-a519-0c9ec3aed4eb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.705092 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04eb603b-ceea-4448-98ee-bc1db325756e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "04eb603b-ceea-4448-98ee-bc1db325756e" (UID: "04eb603b-ceea-4448-98ee-bc1db325756e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.756752 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0198073d-b902-4914-a519-0c9ec3aed4eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0198073d-b902-4914-a519-0c9ec3aed4eb" (UID: "0198073d-b902-4914-a519-0c9ec3aed4eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.760183 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0198073d-b902-4914-a519-0c9ec3aed4eb-config-data" (OuterVolumeSpecName: "config-data") pod "0198073d-b902-4914-a519-0c9ec3aed4eb" (UID: "0198073d-b902-4914-a519-0c9ec3aed4eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.762497 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04eb603b-ceea-4448-98ee-bc1db325756e-config-data" (OuterVolumeSpecName: "config-data") pod "04eb603b-ceea-4448-98ee-bc1db325756e" (UID: "04eb603b-ceea-4448-98ee-bc1db325756e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.771388 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04eb603b-ceea-4448-98ee-bc1db325756e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04eb603b-ceea-4448-98ee-bc1db325756e" (UID: "04eb603b-ceea-4448-98ee-bc1db325756e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.776177 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7wfs\" (UniqueName: \"kubernetes.io/projected/3caed2d8-3c83-45dd-946b-b4765bb99f58-kube-api-access-t7wfs\") pod \"3caed2d8-3c83-45dd-946b-b4765bb99f58\" (UID: \"3caed2d8-3c83-45dd-946b-b4765bb99f58\") "
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.776345 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dq47\" (UniqueName: \"kubernetes.io/projected/3ec0cbce-5157-43e7-9aba-7973b14170ce-kube-api-access-9dq47\") pod \"3ec0cbce-5157-43e7-9aba-7973b14170ce\" (UID: \"3ec0cbce-5157-43e7-9aba-7973b14170ce\") "
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.776456 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ec0cbce-5157-43e7-9aba-7973b14170ce-operator-scripts\") pod \"3ec0cbce-5157-43e7-9aba-7973b14170ce\" (UID: \"3ec0cbce-5157-43e7-9aba-7973b14170ce\") "
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.776578 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3caed2d8-3c83-45dd-946b-b4765bb99f58-operator-scripts\") pod \"3caed2d8-3c83-45dd-946b-b4765bb99f58\" (UID: \"3caed2d8-3c83-45dd-946b-b4765bb99f58\") "
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.777080 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn8tr\" (UniqueName: \"kubernetes.io/projected/0b5e6ecd-a8a5-4722-8195-aa62753ce56f-kube-api-access-sn8tr\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.777154 4847 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04eb603b-ceea-4448-98ee-bc1db325756e-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.777212 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jbj7\" (UniqueName: \"kubernetes.io/projected/04eb603b-ceea-4448-98ee-bc1db325756e-kube-api-access-7jbj7\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.777278 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0198073d-b902-4914-a519-0c9ec3aed4eb-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.777335 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9808ca0d-1d9f-4692-ae85-975a6ca3822f-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.777389 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04eb603b-ceea-4448-98ee-bc1db325756e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.777442 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04eb603b-ceea-4448-98ee-bc1db325756e-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.777501 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dx5d\" (UniqueName: \"kubernetes.io/projected/9808ca0d-1d9f-4692-ae85-975a6ca3822f-kube-api-access-6dx5d\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.777558 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0198073d-b902-4914-a519-0c9ec3aed4eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.777808 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b5e6ecd-a8a5-4722-8195-aa62753ce56f-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.777884 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz4jc\" (UniqueName: \"kubernetes.io/projected/0198073d-b902-4914-a519-0c9ec3aed4eb-kube-api-access-fz4jc\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.778180 4847 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0198073d-b902-4914-a519-0c9ec3aed4eb-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.778687 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3caed2d8-3c83-45dd-946b-b4765bb99f58-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3caed2d8-3c83-45dd-946b-b4765bb99f58" (UID: "3caed2d8-3c83-45dd-946b-b4765bb99f58"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.779119 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ec0cbce-5157-43e7-9aba-7973b14170ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3ec0cbce-5157-43e7-9aba-7973b14170ce" (UID: "3ec0cbce-5157-43e7-9aba-7973b14170ce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.779565 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3caed2d8-3c83-45dd-946b-b4765bb99f58-kube-api-access-t7wfs" (OuterVolumeSpecName: "kube-api-access-t7wfs") pod "3caed2d8-3c83-45dd-946b-b4765bb99f58" (UID: "3caed2d8-3c83-45dd-946b-b4765bb99f58"). InnerVolumeSpecName "kube-api-access-t7wfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.781763 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ec0cbce-5157-43e7-9aba-7973b14170ce-kube-api-access-9dq47" (OuterVolumeSpecName: "kube-api-access-9dq47") pod "3ec0cbce-5157-43e7-9aba-7973b14170ce" (UID: "3ec0cbce-5157-43e7-9aba-7973b14170ce"). InnerVolumeSpecName "kube-api-access-9dq47". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.879742 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3caed2d8-3c83-45dd-946b-b4765bb99f58-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.879774 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7wfs\" (UniqueName: \"kubernetes.io/projected/3caed2d8-3c83-45dd-946b-b4765bb99f58-kube-api-access-t7wfs\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.879786 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dq47\" (UniqueName: \"kubernetes.io/projected/3ec0cbce-5157-43e7-9aba-7973b14170ce-kube-api-access-9dq47\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:39 crc kubenswrapper[4847]: I0218 00:49:39.879794 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ec0cbce-5157-43e7-9aba-7973b14170ce-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:40 crc kubenswrapper[4847]: I0218 00:49:40.019316 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c4f1-account-create-update-qvp96"
Feb 18 00:49:40 crc kubenswrapper[4847]: I0218 00:49:40.019309 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c4f1-account-create-update-qvp96" event={"ID":"0b5e6ecd-a8a5-4722-8195-aa62753ce56f","Type":"ContainerDied","Data":"a885117ecc80ea5a80a2f1997ee7bf986fe2c82d9a7df9c5fe93a84bf4b3f4f9"}
Feb 18 00:49:40 crc kubenswrapper[4847]: I0218 00:49:40.019448 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a885117ecc80ea5a80a2f1997ee7bf986fe2c82d9a7df9c5fe93a84bf4b3f4f9"
Feb 18 00:49:40 crc kubenswrapper[4847]: I0218 00:49:40.021202 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-92b2-account-create-update-7v9rd"
Feb 18 00:49:40 crc kubenswrapper[4847]: I0218 00:49:40.021225 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-92b2-account-create-update-7v9rd" event={"ID":"9808ca0d-1d9f-4692-ae85-975a6ca3822f","Type":"ContainerDied","Data":"bc5765d9607562b54aae2a8b7257ec7f619bdfe1cd25845b017e60313ea2ffd3"}
Feb 18 00:49:40 crc kubenswrapper[4847]: I0218 00:49:40.021279 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc5765d9607562b54aae2a8b7257ec7f619bdfe1cd25845b017e60313ea2ffd3"
Feb 18 00:49:40 crc kubenswrapper[4847]: I0218 00:49:40.023188 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-58b87f4965-bck5v"
Feb 18 00:49:40 crc kubenswrapper[4847]: I0218 00:49:40.023209 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-58b87f4965-bck5v" event={"ID":"0198073d-b902-4914-a519-0c9ec3aed4eb","Type":"ContainerDied","Data":"0c294f14912bfa7e14460ac41736c66070f90f76f73cd7a3bc8a84326dfdd1c6"}
Feb 18 00:49:40 crc kubenswrapper[4847]: I0218 00:49:40.023286 4847 scope.go:117] "RemoveContainer" containerID="2ee2c371d2aa3b3fe92920a6403a8eadb7c067f47a9af41015c17502df3fd689"
Feb 18 00:49:40 crc kubenswrapper[4847]: I0218 00:49:40.025069 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5d8485fcfd-qf9k4"
Feb 18 00:49:40 crc kubenswrapper[4847]: I0218 00:49:40.028697 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5d8485fcfd-qf9k4" event={"ID":"04eb603b-ceea-4448-98ee-bc1db325756e","Type":"ContainerDied","Data":"7b1778f12e3fe3506fd7412f530ec38b6fb1b57ca1bd0d79eb111462b306da42"}
Feb 18 00:49:40 crc kubenswrapper[4847]: I0218 00:49:40.034239 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5ce0-account-create-update-5qbbm"
Feb 18 00:49:40 crc kubenswrapper[4847]: I0218 00:49:40.034421 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5ce0-account-create-update-5qbbm" event={"ID":"3caed2d8-3c83-45dd-946b-b4765bb99f58","Type":"ContainerDied","Data":"de9166a98653ddf263d0a42269055647f4d2516437bbd2586eeee6a545a23ff5"}
Feb 18 00:49:40 crc kubenswrapper[4847]: I0218 00:49:40.034445 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de9166a98653ddf263d0a42269055647f4d2516437bbd2586eeee6a545a23ff5"
Feb 18 00:49:40 crc kubenswrapper[4847]: I0218 00:49:40.040492 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fqndf" event={"ID":"3ec0cbce-5157-43e7-9aba-7973b14170ce","Type":"ContainerDied","Data":"11d932da4daa8042f67f01b2421449348d80b888ce92a22e64f838df58e8c35d"}
Feb 18 00:49:40 crc kubenswrapper[4847]: I0218 00:49:40.040530 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11d932da4daa8042f67f01b2421449348d80b888ce92a22e64f838df58e8c35d"
Feb 18 00:49:40 crc kubenswrapper[4847]: I0218 00:49:40.040590 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fqndf"
Feb 18 00:49:40 crc kubenswrapper[4847]: I0218 00:49:40.063814 4847 scope.go:117] "RemoveContainer" containerID="304bef84d5aeef3e6a105acfcc836c3f3a91864d01747848e19050805409cefa"
Feb 18 00:49:40 crc kubenswrapper[4847]: I0218 00:49:40.083423 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-58b87f4965-bck5v"]
Feb 18 00:49:40 crc kubenswrapper[4847]: I0218 00:49:40.094481 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-58b87f4965-bck5v"]
Feb 18 00:49:40 crc kubenswrapper[4847]: I0218 00:49:40.105022 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5d8485fcfd-qf9k4"]
Feb 18 00:49:40 crc kubenswrapper[4847]: I0218 00:49:40.114859 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5d8485fcfd-qf9k4"]
Feb 18 00:49:41 crc kubenswrapper[4847]: I0218 00:49:41.415698 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0198073d-b902-4914-a519-0c9ec3aed4eb" path="/var/lib/kubelet/pods/0198073d-b902-4914-a519-0c9ec3aed4eb/volumes"
Feb 18 00:49:41 crc kubenswrapper[4847]: I0218 00:49:41.417583 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04eb603b-ceea-4448-98ee-bc1db325756e" path="/var/lib/kubelet/pods/04eb603b-ceea-4448-98ee-bc1db325756e/volumes"
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.082747 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.099495 4847 generic.go:334] "Generic (PLEG): container finished" podID="7d6a2670-a6f9-4fe7-8356-16cee45d0167" containerID="e2dc111804a9faef6ff8700a4a8c34574288c869ce3a771223c6787eeb8a0276" exitCode=0
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.099552 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-556dbf5b5b-fmjz4" event={"ID":"7d6a2670-a6f9-4fe7-8356-16cee45d0167","Type":"ContainerDied","Data":"e2dc111804a9faef6ff8700a4a8c34574288c869ce3a771223c6787eeb8a0276"}
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.513060 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-556dbf5b5b-fmjz4"
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.581211 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-scripts\") pod \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") "
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.581364 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lznr7\" (UniqueName: \"kubernetes.io/projected/7d6a2670-a6f9-4fe7-8356-16cee45d0167-kube-api-access-lznr7\") pod \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") "
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.581391 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-public-tls-certs\") pod \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") "
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.581419 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-internal-tls-certs\") pod \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") "
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.581455 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d6a2670-a6f9-4fe7-8356-16cee45d0167-logs\") pod \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") "
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.581488 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-config-data\") pod \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") "
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.581531 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-combined-ca-bundle\") pod \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\" (UID: \"7d6a2670-a6f9-4fe7-8356-16cee45d0167\") "
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.584957 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d6a2670-a6f9-4fe7-8356-16cee45d0167-logs" (OuterVolumeSpecName: "logs") pod "7d6a2670-a6f9-4fe7-8356-16cee45d0167" (UID: "7d6a2670-a6f9-4fe7-8356-16cee45d0167"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.614976 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.619749 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d6a2670-a6f9-4fe7-8356-16cee45d0167-kube-api-access-lznr7" (OuterVolumeSpecName: "kube-api-access-lznr7") pod "7d6a2670-a6f9-4fe7-8356-16cee45d0167" (UID: "7d6a2670-a6f9-4fe7-8356-16cee45d0167"). InnerVolumeSpecName "kube-api-access-lznr7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.623094 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-scripts" (OuterVolumeSpecName: "scripts") pod "7d6a2670-a6f9-4fe7-8356-16cee45d0167" (UID: "7d6a2670-a6f9-4fe7-8356-16cee45d0167"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.682191 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-config-data" (OuterVolumeSpecName: "config-data") pod "7d6a2670-a6f9-4fe7-8356-16cee45d0167" (UID: "7d6a2670-a6f9-4fe7-8356-16cee45d0167"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.683633 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lznr7\" (UniqueName: \"kubernetes.io/projected/7d6a2670-a6f9-4fe7-8356-16cee45d0167-kube-api-access-lznr7\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.683652 4847 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d6a2670-a6f9-4fe7-8356-16cee45d0167-logs\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.683665 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.683676 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.741204 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.755511 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d6a2670-a6f9-4fe7-8356-16cee45d0167" (UID: "7d6a2670-a6f9-4fe7-8356-16cee45d0167"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.785907 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp2c4\" (UniqueName: \"kubernetes.io/projected/21c48173-1f36-48e4-be55-8a949632f022-kube-api-access-tp2c4\") pod \"21c48173-1f36-48e4-be55-8a949632f022\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") "
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.790753 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-config-data\") pod \"21c48173-1f36-48e4-be55-8a949632f022\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") "
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.790813 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-scripts\") pod \"21c48173-1f36-48e4-be55-8a949632f022\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") "
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.790909 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-sg-core-conf-yaml\") pod \"21c48173-1f36-48e4-be55-8a949632f022\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") "
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.790934 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21c48173-1f36-48e4-be55-8a949632f022-log-httpd\") pod \"21c48173-1f36-48e4-be55-8a949632f022\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") "
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.790963 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21c48173-1f36-48e4-be55-8a949632f022-run-httpd\") pod \"21c48173-1f36-48e4-be55-8a949632f022\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") "
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.790988 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-combined-ca-bundle\") pod \"21c48173-1f36-48e4-be55-8a949632f022\" (UID: \"21c48173-1f36-48e4-be55-8a949632f022\") "
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.792195 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.793369 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21c48173-1f36-48e4-be55-8a949632f022-kube-api-access-tp2c4" (OuterVolumeSpecName: "kube-api-access-tp2c4") pod "21c48173-1f36-48e4-be55-8a949632f022" (UID: "21c48173-1f36-48e4-be55-8a949632f022"). InnerVolumeSpecName "kube-api-access-tp2c4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.795130 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21c48173-1f36-48e4-be55-8a949632f022-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "21c48173-1f36-48e4-be55-8a949632f022" (UID: "21c48173-1f36-48e4-be55-8a949632f022"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.795372 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21c48173-1f36-48e4-be55-8a949632f022-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "21c48173-1f36-48e4-be55-8a949632f022" (UID: "21c48173-1f36-48e4-be55-8a949632f022"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.813584 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-scripts" (OuterVolumeSpecName: "scripts") pod "21c48173-1f36-48e4-be55-8a949632f022" (UID: "21c48173-1f36-48e4-be55-8a949632f022"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.820578 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7d6a2670-a6f9-4fe7-8356-16cee45d0167" (UID: "7d6a2670-a6f9-4fe7-8356-16cee45d0167"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.837752 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7d6a2670-a6f9-4fe7-8356-16cee45d0167" (UID: "7d6a2670-a6f9-4fe7-8356-16cee45d0167"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.849223 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "21c48173-1f36-48e4-be55-8a949632f022" (UID: "21c48173-1f36-48e4-be55-8a949632f022"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.894333 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp2c4\" (UniqueName: \"kubernetes.io/projected/21c48173-1f36-48e4-be55-8a949632f022-kube-api-access-tp2c4\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.894362 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-scripts\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.894371 4847 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.894381 4847 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d6a2670-a6f9-4fe7-8356-16cee45d0167-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.894390 4847 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.894398 4847 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21c48173-1f36-48e4-be55-8a949632f022-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.894407 4847 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/21c48173-1f36-48e4-be55-8a949632f022-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.910280 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21c48173-1f36-48e4-be55-8a949632f022" (UID: "21c48173-1f36-48e4-be55-8a949632f022"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.943689 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-config-data" (OuterVolumeSpecName: "config-data") pod "21c48173-1f36-48e4-be55-8a949632f022" (UID: "21c48173-1f36-48e4-be55-8a949632f022"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.996707 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:42 crc kubenswrapper[4847]: I0218 00:49:42.996743 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21c48173-1f36-48e4-be55-8a949632f022-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.116343 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-556dbf5b5b-fmjz4" event={"ID":"7d6a2670-a6f9-4fe7-8356-16cee45d0167","Type":"ContainerDied","Data":"6d59e58c79fb104820576f24a0b9b9995e49e202a92ee24487e55477fc033ebb"} Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.116375 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-556dbf5b5b-fmjz4" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.116437 4847 scope.go:117] "RemoveContainer" containerID="e2dc111804a9faef6ff8700a4a8c34574288c869ce3a771223c6787eeb8a0276" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.121194 4847 generic.go:334] "Generic (PLEG): container finished" podID="21c48173-1f36-48e4-be55-8a949632f022" containerID="53ae0d0f446d72257aa6c21dc2631e58ba9a5ff7a325daaaa3d7384a4ee380a6" exitCode=0 Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.121246 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21c48173-1f36-48e4-be55-8a949632f022","Type":"ContainerDied","Data":"53ae0d0f446d72257aa6c21dc2631e58ba9a5ff7a325daaaa3d7384a4ee380a6"} Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.121279 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"21c48173-1f36-48e4-be55-8a949632f022","Type":"ContainerDied","Data":"b0e236975f42ef777e673a6184de644642c742436f941ab41e418490ec392356"} Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.121345 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.165257 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-556dbf5b5b-fmjz4"] Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.171316 4847 scope.go:117] "RemoveContainer" containerID="69456edca1a4d92be728d83efe1bc2e0767d48bce249cbe098d4830d884ffe42" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.176417 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-556dbf5b5b-fmjz4"] Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.187729 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.198286 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.204636 4847 scope.go:117] "RemoveContainer" containerID="fafeade87f55cf9a053e0b88adab11f0816aa0ff32c3496f8d59a70212dfd71f" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.213478 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:49:43 crc kubenswrapper[4847]: E0218 00:49:43.213897 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b5e6ecd-a8a5-4722-8195-aa62753ce56f" containerName="mariadb-account-create-update" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.213910 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b5e6ecd-a8a5-4722-8195-aa62753ce56f" containerName="mariadb-account-create-update" Feb 18 00:49:43 crc kubenswrapper[4847]: E0218 00:49:43.213922 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04eb603b-ceea-4448-98ee-bc1db325756e" containerName="heat-api" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.213928 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="04eb603b-ceea-4448-98ee-bc1db325756e" containerName="heat-api" Feb 18 
00:49:43 crc kubenswrapper[4847]: E0218 00:49:43.213935 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c48173-1f36-48e4-be55-8a949632f022" containerName="ceilometer-central-agent" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.213941 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c48173-1f36-48e4-be55-8a949632f022" containerName="ceilometer-central-agent" Feb 18 00:49:43 crc kubenswrapper[4847]: E0218 00:49:43.213953 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ec0cbce-5157-43e7-9aba-7973b14170ce" containerName="mariadb-database-create" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.213958 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ec0cbce-5157-43e7-9aba-7973b14170ce" containerName="mariadb-database-create" Feb 18 00:49:43 crc kubenswrapper[4847]: E0218 00:49:43.213973 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="defca7ab-d2c8-4c4a-910f-06bebeba7b81" containerName="mariadb-database-create" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.213978 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="defca7ab-d2c8-4c4a-910f-06bebeba7b81" containerName="mariadb-database-create" Feb 18 00:49:43 crc kubenswrapper[4847]: E0218 00:49:43.213992 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c48173-1f36-48e4-be55-8a949632f022" containerName="ceilometer-notification-agent" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.213998 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c48173-1f36-48e4-be55-8a949632f022" containerName="ceilometer-notification-agent" Feb 18 00:49:43 crc kubenswrapper[4847]: E0218 00:49:43.214017 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0198073d-b902-4914-a519-0c9ec3aed4eb" containerName="heat-cfnapi" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214022 4847 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0198073d-b902-4914-a519-0c9ec3aed4eb" containerName="heat-cfnapi" Feb 18 00:49:43 crc kubenswrapper[4847]: E0218 00:49:43.214033 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c48173-1f36-48e4-be55-8a949632f022" containerName="sg-core" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214039 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c48173-1f36-48e4-be55-8a949632f022" containerName="sg-core" Feb 18 00:49:43 crc kubenswrapper[4847]: E0218 00:49:43.214052 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0198073d-b902-4914-a519-0c9ec3aed4eb" containerName="heat-cfnapi" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214058 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="0198073d-b902-4914-a519-0c9ec3aed4eb" containerName="heat-cfnapi" Feb 18 00:49:43 crc kubenswrapper[4847]: E0218 00:49:43.214065 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9808ca0d-1d9f-4692-ae85-975a6ca3822f" containerName="mariadb-account-create-update" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214070 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="9808ca0d-1d9f-4692-ae85-975a6ca3822f" containerName="mariadb-account-create-update" Feb 18 00:49:43 crc kubenswrapper[4847]: E0218 00:49:43.214083 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a31394-e534-4372-9f15-344df4565d6a" containerName="init" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214088 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a31394-e534-4372-9f15-344df4565d6a" containerName="init" Feb 18 00:49:43 crc kubenswrapper[4847]: E0218 00:49:43.214100 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6a2670-a6f9-4fe7-8356-16cee45d0167" containerName="placement-api" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214106 4847 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7d6a2670-a6f9-4fe7-8356-16cee45d0167" containerName="placement-api" Feb 18 00:49:43 crc kubenswrapper[4847]: E0218 00:49:43.214121 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a31394-e534-4372-9f15-344df4565d6a" containerName="dnsmasq-dns" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214126 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a31394-e534-4372-9f15-344df4565d6a" containerName="dnsmasq-dns" Feb 18 00:49:43 crc kubenswrapper[4847]: E0218 00:49:43.214134 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6a2670-a6f9-4fe7-8356-16cee45d0167" containerName="placement-log" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214140 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6a2670-a6f9-4fe7-8356-16cee45d0167" containerName="placement-log" Feb 18 00:49:43 crc kubenswrapper[4847]: E0218 00:49:43.214149 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3caed2d8-3c83-45dd-946b-b4765bb99f58" containerName="mariadb-account-create-update" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214155 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="3caed2d8-3c83-45dd-946b-b4765bb99f58" containerName="mariadb-account-create-update" Feb 18 00:49:43 crc kubenswrapper[4847]: E0218 00:49:43.214164 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2217ced3-3917-43e6-8c1b-23c7184f4591" containerName="mariadb-database-create" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214170 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="2217ced3-3917-43e6-8c1b-23c7184f4591" containerName="mariadb-database-create" Feb 18 00:49:43 crc kubenswrapper[4847]: E0218 00:49:43.214179 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c48173-1f36-48e4-be55-8a949632f022" containerName="proxy-httpd" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214184 4847 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="21c48173-1f36-48e4-be55-8a949632f022" containerName="proxy-httpd" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214370 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="04eb603b-ceea-4448-98ee-bc1db325756e" containerName="heat-api" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214380 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c48173-1f36-48e4-be55-8a949632f022" containerName="proxy-httpd" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214389 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ec0cbce-5157-43e7-9aba-7973b14170ce" containerName="mariadb-database-create" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214401 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="3caed2d8-3c83-45dd-946b-b4765bb99f58" containerName="mariadb-account-create-update" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214408 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="04eb603b-ceea-4448-98ee-bc1db325756e" containerName="heat-api" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214416 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c48173-1f36-48e4-be55-8a949632f022" containerName="ceilometer-central-agent" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214426 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6a2670-a6f9-4fe7-8356-16cee45d0167" containerName="placement-log" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214433 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="9808ca0d-1d9f-4692-ae85-975a6ca3822f" containerName="mariadb-account-create-update" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214440 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="defca7ab-d2c8-4c4a-910f-06bebeba7b81" containerName="mariadb-database-create" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 
00:49:43.214449 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c48173-1f36-48e4-be55-8a949632f022" containerName="ceilometer-notification-agent" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214457 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="0198073d-b902-4914-a519-0c9ec3aed4eb" containerName="heat-cfnapi" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214466 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="0198073d-b902-4914-a519-0c9ec3aed4eb" containerName="heat-cfnapi" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214476 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="2217ced3-3917-43e6-8c1b-23c7184f4591" containerName="mariadb-database-create" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214485 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6a2670-a6f9-4fe7-8356-16cee45d0167" containerName="placement-api" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214496 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c48173-1f36-48e4-be55-8a949632f022" containerName="sg-core" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214504 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b5e6ecd-a8a5-4722-8195-aa62753ce56f" containerName="mariadb-account-create-update" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214513 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0a31394-e534-4372-9f15-344df4565d6a" containerName="dnsmasq-dns" Feb 18 00:49:43 crc kubenswrapper[4847]: E0218 00:49:43.214705 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04eb603b-ceea-4448-98ee-bc1db325756e" containerName="heat-api" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.214713 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="04eb603b-ceea-4448-98ee-bc1db325756e" containerName="heat-api" Feb 18 00:49:43 crc 
kubenswrapper[4847]: I0218 00:49:43.216267 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.221873 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.222023 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.222153 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.229316 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.243653 4847 scope.go:117] "RemoveContainer" containerID="fe85d1d694a250ab2f5684f0445700c12255dab0ea10c0e7c3a3b1d88442ccde" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.290017 4847 scope.go:117] "RemoveContainer" containerID="f77d21c6986dfc0a4e4db4d9a5fdc6e8fb9566716c94bd019802d0699eae6ff4" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.301871 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-log-httpd\") pod \"ceilometer-0\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.301962 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.301993 4847 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-run-httpd\") pod \"ceilometer-0\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.302052 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.302077 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-config-data\") pod \"ceilometer-0\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.302106 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.302131 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-scripts\") pod \"ceilometer-0\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.302154 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgl6h\" (UniqueName: 
\"kubernetes.io/projected/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-kube-api-access-dgl6h\") pod \"ceilometer-0\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.314452 4847 scope.go:117] "RemoveContainer" containerID="53ae0d0f446d72257aa6c21dc2631e58ba9a5ff7a325daaaa3d7384a4ee380a6" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.339932 4847 scope.go:117] "RemoveContainer" containerID="fafeade87f55cf9a053e0b88adab11f0816aa0ff32c3496f8d59a70212dfd71f" Feb 18 00:49:43 crc kubenswrapper[4847]: E0218 00:49:43.340421 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fafeade87f55cf9a053e0b88adab11f0816aa0ff32c3496f8d59a70212dfd71f\": container with ID starting with fafeade87f55cf9a053e0b88adab11f0816aa0ff32c3496f8d59a70212dfd71f not found: ID does not exist" containerID="fafeade87f55cf9a053e0b88adab11f0816aa0ff32c3496f8d59a70212dfd71f" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.340457 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fafeade87f55cf9a053e0b88adab11f0816aa0ff32c3496f8d59a70212dfd71f"} err="failed to get container status \"fafeade87f55cf9a053e0b88adab11f0816aa0ff32c3496f8d59a70212dfd71f\": rpc error: code = NotFound desc = could not find container \"fafeade87f55cf9a053e0b88adab11f0816aa0ff32c3496f8d59a70212dfd71f\": container with ID starting with fafeade87f55cf9a053e0b88adab11f0816aa0ff32c3496f8d59a70212dfd71f not found: ID does not exist" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.340480 4847 scope.go:117] "RemoveContainer" containerID="fe85d1d694a250ab2f5684f0445700c12255dab0ea10c0e7c3a3b1d88442ccde" Feb 18 00:49:43 crc kubenswrapper[4847]: E0218 00:49:43.340909 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fe85d1d694a250ab2f5684f0445700c12255dab0ea10c0e7c3a3b1d88442ccde\": container with ID starting with fe85d1d694a250ab2f5684f0445700c12255dab0ea10c0e7c3a3b1d88442ccde not found: ID does not exist" containerID="fe85d1d694a250ab2f5684f0445700c12255dab0ea10c0e7c3a3b1d88442ccde" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.340939 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe85d1d694a250ab2f5684f0445700c12255dab0ea10c0e7c3a3b1d88442ccde"} err="failed to get container status \"fe85d1d694a250ab2f5684f0445700c12255dab0ea10c0e7c3a3b1d88442ccde\": rpc error: code = NotFound desc = could not find container \"fe85d1d694a250ab2f5684f0445700c12255dab0ea10c0e7c3a3b1d88442ccde\": container with ID starting with fe85d1d694a250ab2f5684f0445700c12255dab0ea10c0e7c3a3b1d88442ccde not found: ID does not exist" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.340961 4847 scope.go:117] "RemoveContainer" containerID="f77d21c6986dfc0a4e4db4d9a5fdc6e8fb9566716c94bd019802d0699eae6ff4" Feb 18 00:49:43 crc kubenswrapper[4847]: E0218 00:49:43.341213 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f77d21c6986dfc0a4e4db4d9a5fdc6e8fb9566716c94bd019802d0699eae6ff4\": container with ID starting with f77d21c6986dfc0a4e4db4d9a5fdc6e8fb9566716c94bd019802d0699eae6ff4 not found: ID does not exist" containerID="f77d21c6986dfc0a4e4db4d9a5fdc6e8fb9566716c94bd019802d0699eae6ff4" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.341243 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f77d21c6986dfc0a4e4db4d9a5fdc6e8fb9566716c94bd019802d0699eae6ff4"} err="failed to get container status \"f77d21c6986dfc0a4e4db4d9a5fdc6e8fb9566716c94bd019802d0699eae6ff4\": rpc error: code = NotFound desc = could not find container \"f77d21c6986dfc0a4e4db4d9a5fdc6e8fb9566716c94bd019802d0699eae6ff4\": container with ID 
starting with f77d21c6986dfc0a4e4db4d9a5fdc6e8fb9566716c94bd019802d0699eae6ff4 not found: ID does not exist" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.341260 4847 scope.go:117] "RemoveContainer" containerID="53ae0d0f446d72257aa6c21dc2631e58ba9a5ff7a325daaaa3d7384a4ee380a6" Feb 18 00:49:43 crc kubenswrapper[4847]: E0218 00:49:43.341476 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53ae0d0f446d72257aa6c21dc2631e58ba9a5ff7a325daaaa3d7384a4ee380a6\": container with ID starting with 53ae0d0f446d72257aa6c21dc2631e58ba9a5ff7a325daaaa3d7384a4ee380a6 not found: ID does not exist" containerID="53ae0d0f446d72257aa6c21dc2631e58ba9a5ff7a325daaaa3d7384a4ee380a6" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.341498 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53ae0d0f446d72257aa6c21dc2631e58ba9a5ff7a325daaaa3d7384a4ee380a6"} err="failed to get container status \"53ae0d0f446d72257aa6c21dc2631e58ba9a5ff7a325daaaa3d7384a4ee380a6\": rpc error: code = NotFound desc = could not find container \"53ae0d0f446d72257aa6c21dc2631e58ba9a5ff7a325daaaa3d7384a4ee380a6\": container with ID starting with 53ae0d0f446d72257aa6c21dc2631e58ba9a5ff7a325daaaa3d7384a4ee380a6 not found: ID does not exist" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.404147 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.404192 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-run-httpd\") pod \"ceilometer-0\" (UID: 
\"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.404263 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.404284 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-config-data\") pod \"ceilometer-0\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.404316 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.404342 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-scripts\") pod \"ceilometer-0\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.404362 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgl6h\" (UniqueName: \"kubernetes.io/projected/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-kube-api-access-dgl6h\") pod \"ceilometer-0\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.404409 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-log-httpd\") pod \"ceilometer-0\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.404818 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-log-httpd\") pod \"ceilometer-0\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.405175 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-run-httpd\") pod \"ceilometer-0\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.408138 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-scripts\") pod \"ceilometer-0\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.410322 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.410731 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.410793 4847 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.413693 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-config-data\") pod \"ceilometer-0\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.417229 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21c48173-1f36-48e4-be55-8a949632f022" path="/var/lib/kubelet/pods/21c48173-1f36-48e4-be55-8a949632f022/volumes" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.418012 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d6a2670-a6f9-4fe7-8356-16cee45d0167" path="/var/lib/kubelet/pods/7d6a2670-a6f9-4fe7-8356-16cee45d0167/volumes" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.426312 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgl6h\" (UniqueName: \"kubernetes.io/projected/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-kube-api-access-dgl6h\") pod \"ceilometer-0\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " pod="openstack/ceilometer-0" Feb 18 00:49:43 crc kubenswrapper[4847]: I0218 00:49:43.565189 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:49:44 crc kubenswrapper[4847]: W0218 00:49:44.081441 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7de9c24e_68dc_4cbc_85fe_3a4012fa2fee.slice/crio-3eb914144bffa2c24603aa6fd90cc7aaf4e546f851938b8f33d3f45ce3314287 WatchSource:0}: Error finding container 3eb914144bffa2c24603aa6fd90cc7aaf4e546f851938b8f33d3f45ce3314287: Status 404 returned error can't find the container with id 3eb914144bffa2c24603aa6fd90cc7aaf4e546f851938b8f33d3f45ce3314287 Feb 18 00:49:44 crc kubenswrapper[4847]: I0218 00:49:44.084498 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:49:44 crc kubenswrapper[4847]: I0218 00:49:44.130751 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee","Type":"ContainerStarted","Data":"3eb914144bffa2c24603aa6fd90cc7aaf4e546f851938b8f33d3f45ce3314287"} Feb 18 00:49:44 crc kubenswrapper[4847]: I0218 00:49:44.309791 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-69tbz"] Feb 18 00:49:44 crc kubenswrapper[4847]: I0218 00:49:44.311982 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-69tbz" Feb 18 00:49:44 crc kubenswrapper[4847]: I0218 00:49:44.318140 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5vtws" Feb 18 00:49:44 crc kubenswrapper[4847]: I0218 00:49:44.318404 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 18 00:49:44 crc kubenswrapper[4847]: I0218 00:49:44.318841 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 00:49:44 crc kubenswrapper[4847]: I0218 00:49:44.322502 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-69tbz"] Feb 18 00:49:44 crc kubenswrapper[4847]: I0218 00:49:44.390324 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-67b9f7bd8b-phnps" Feb 18 00:49:44 crc kubenswrapper[4847]: I0218 00:49:44.423469 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb3804b-7097-4c08-9db9-8b08a71eb896-scripts\") pod \"nova-cell0-conductor-db-sync-69tbz\" (UID: \"ceb3804b-7097-4c08-9db9-8b08a71eb896\") " pod="openstack/nova-cell0-conductor-db-sync-69tbz" Feb 18 00:49:44 crc kubenswrapper[4847]: I0218 00:49:44.423549 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb3804b-7097-4c08-9db9-8b08a71eb896-config-data\") pod \"nova-cell0-conductor-db-sync-69tbz\" (UID: \"ceb3804b-7097-4c08-9db9-8b08a71eb896\") " pod="openstack/nova-cell0-conductor-db-sync-69tbz" Feb 18 00:49:44 crc kubenswrapper[4847]: I0218 00:49:44.423589 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ceb3804b-7097-4c08-9db9-8b08a71eb896-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-69tbz\" (UID: \"ceb3804b-7097-4c08-9db9-8b08a71eb896\") " pod="openstack/nova-cell0-conductor-db-sync-69tbz" Feb 18 00:49:44 crc kubenswrapper[4847]: I0218 00:49:44.423708 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9lqx\" (UniqueName: \"kubernetes.io/projected/ceb3804b-7097-4c08-9db9-8b08a71eb896-kube-api-access-q9lqx\") pod \"nova-cell0-conductor-db-sync-69tbz\" (UID: \"ceb3804b-7097-4c08-9db9-8b08a71eb896\") " pod="openstack/nova-cell0-conductor-db-sync-69tbz" Feb 18 00:49:44 crc kubenswrapper[4847]: I0218 00:49:44.451872 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5b8c9bd889-lvxrd"] Feb 18 00:49:44 crc kubenswrapper[4847]: I0218 00:49:44.452179 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-5b8c9bd889-lvxrd" podUID="8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e" containerName="heat-engine" containerID="cri-o://eca06a4157ff1cf372238e5b47a78413428dfb85975e0ea3aae4183ffbfd24ce" gracePeriod=60 Feb 18 00:49:44 crc kubenswrapper[4847]: I0218 00:49:44.529238 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9lqx\" (UniqueName: \"kubernetes.io/projected/ceb3804b-7097-4c08-9db9-8b08a71eb896-kube-api-access-q9lqx\") pod \"nova-cell0-conductor-db-sync-69tbz\" (UID: \"ceb3804b-7097-4c08-9db9-8b08a71eb896\") " pod="openstack/nova-cell0-conductor-db-sync-69tbz" Feb 18 00:49:44 crc kubenswrapper[4847]: I0218 00:49:44.529671 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb3804b-7097-4c08-9db9-8b08a71eb896-scripts\") pod \"nova-cell0-conductor-db-sync-69tbz\" (UID: \"ceb3804b-7097-4c08-9db9-8b08a71eb896\") " pod="openstack/nova-cell0-conductor-db-sync-69tbz" Feb 18 00:49:44 
crc kubenswrapper[4847]: I0218 00:49:44.529742 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb3804b-7097-4c08-9db9-8b08a71eb896-config-data\") pod \"nova-cell0-conductor-db-sync-69tbz\" (UID: \"ceb3804b-7097-4c08-9db9-8b08a71eb896\") " pod="openstack/nova-cell0-conductor-db-sync-69tbz" Feb 18 00:49:44 crc kubenswrapper[4847]: I0218 00:49:44.529800 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb3804b-7097-4c08-9db9-8b08a71eb896-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-69tbz\" (UID: \"ceb3804b-7097-4c08-9db9-8b08a71eb896\") " pod="openstack/nova-cell0-conductor-db-sync-69tbz" Feb 18 00:49:44 crc kubenswrapper[4847]: I0218 00:49:44.536570 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb3804b-7097-4c08-9db9-8b08a71eb896-scripts\") pod \"nova-cell0-conductor-db-sync-69tbz\" (UID: \"ceb3804b-7097-4c08-9db9-8b08a71eb896\") " pod="openstack/nova-cell0-conductor-db-sync-69tbz" Feb 18 00:49:44 crc kubenswrapper[4847]: I0218 00:49:44.541536 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb3804b-7097-4c08-9db9-8b08a71eb896-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-69tbz\" (UID: \"ceb3804b-7097-4c08-9db9-8b08a71eb896\") " pod="openstack/nova-cell0-conductor-db-sync-69tbz" Feb 18 00:49:44 crc kubenswrapper[4847]: I0218 00:49:44.541826 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb3804b-7097-4c08-9db9-8b08a71eb896-config-data\") pod \"nova-cell0-conductor-db-sync-69tbz\" (UID: \"ceb3804b-7097-4c08-9db9-8b08a71eb896\") " pod="openstack/nova-cell0-conductor-db-sync-69tbz" Feb 18 00:49:44 crc kubenswrapper[4847]: I0218 
00:49:44.548214 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9lqx\" (UniqueName: \"kubernetes.io/projected/ceb3804b-7097-4c08-9db9-8b08a71eb896-kube-api-access-q9lqx\") pod \"nova-cell0-conductor-db-sync-69tbz\" (UID: \"ceb3804b-7097-4c08-9db9-8b08a71eb896\") " pod="openstack/nova-cell0-conductor-db-sync-69tbz" Feb 18 00:49:44 crc kubenswrapper[4847]: I0218 00:49:44.682720 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-69tbz" Feb 18 00:49:45 crc kubenswrapper[4847]: I0218 00:49:45.153277 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee","Type":"ContainerStarted","Data":"89f7b96e7da535eb82f402e6513c835e08f20f6f3fd9ac1669631202c11956ec"} Feb 18 00:49:45 crc kubenswrapper[4847]: I0218 00:49:45.194903 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-69tbz"] Feb 18 00:49:45 crc kubenswrapper[4847]: E0218 00:49:45.978954 4847 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eca06a4157ff1cf372238e5b47a78413428dfb85975e0ea3aae4183ffbfd24ce" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 00:49:45 crc kubenswrapper[4847]: E0218 00:49:45.983282 4847 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eca06a4157ff1cf372238e5b47a78413428dfb85975e0ea3aae4183ffbfd24ce" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 00:49:45 crc kubenswrapper[4847]: E0218 00:49:45.984543 4847 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec 
PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eca06a4157ff1cf372238e5b47a78413428dfb85975e0ea3aae4183ffbfd24ce" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 00:49:45 crc kubenswrapper[4847]: E0218 00:49:45.984642 4847 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5b8c9bd889-lvxrd" podUID="8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e" containerName="heat-engine" Feb 18 00:49:46 crc kubenswrapper[4847]: I0218 00:49:46.181127 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee","Type":"ContainerStarted","Data":"e0e9a24947ab409d9c4ce75895f740f9ae4c98ebf7d3780bcc14233625ac2970"} Feb 18 00:49:46 crc kubenswrapper[4847]: I0218 00:49:46.196734 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-69tbz" event={"ID":"ceb3804b-7097-4c08-9db9-8b08a71eb896","Type":"ContainerStarted","Data":"214d3ab3d000e3ae16ac8a137a5a562619cedab5ed583d4312b31ffd7dd6e183"} Feb 18 00:49:47 crc kubenswrapper[4847]: I0218 00:49:47.215649 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee","Type":"ContainerStarted","Data":"d5b8ccb9b0987e3a3d77f7e6dde6802856161256b5d7fe56a43c0f4fb9a5379b"} Feb 18 00:49:48 crc kubenswrapper[4847]: I0218 00:49:48.231035 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee","Type":"ContainerStarted","Data":"8a6586c6c912ca16b875dd56e5426952da5bc78d18ab5bf036d20118ca7c8147"} Feb 18 00:49:48 crc kubenswrapper[4847]: I0218 00:49:48.232240 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 00:49:48 crc kubenswrapper[4847]: I0218 
00:49:48.253244 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.471811593 podStartE2EDuration="5.253228099s" podCreationTimestamp="2026-02-18 00:49:43 +0000 UTC" firstStartedPulling="2026-02-18 00:49:44.083470335 +0000 UTC m=+1457.460821267" lastFinishedPulling="2026-02-18 00:49:47.864886831 +0000 UTC m=+1461.242237773" observedRunningTime="2026-02-18 00:49:48.248197573 +0000 UTC m=+1461.625548515" watchObservedRunningTime="2026-02-18 00:49:48.253228099 +0000 UTC m=+1461.630579041" Feb 18 00:49:50 crc kubenswrapper[4847]: I0218 00:49:50.740924 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:49:50 crc kubenswrapper[4847]: I0218 00:49:50.741507 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" containerName="ceilometer-central-agent" containerID="cri-o://89f7b96e7da535eb82f402e6513c835e08f20f6f3fd9ac1669631202c11956ec" gracePeriod=30 Feb 18 00:49:50 crc kubenswrapper[4847]: I0218 00:49:50.741595 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" containerName="proxy-httpd" containerID="cri-o://8a6586c6c912ca16b875dd56e5426952da5bc78d18ab5bf036d20118ca7c8147" gracePeriod=30 Feb 18 00:49:50 crc kubenswrapper[4847]: I0218 00:49:50.741588 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" containerName="sg-core" containerID="cri-o://d5b8ccb9b0987e3a3d77f7e6dde6802856161256b5d7fe56a43c0f4fb9a5379b" gracePeriod=30 Feb 18 00:49:50 crc kubenswrapper[4847]: I0218 00:49:50.741660 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" 
containerName="ceilometer-notification-agent" containerID="cri-o://e0e9a24947ab409d9c4ce75895f740f9ae4c98ebf7d3780bcc14233625ac2970" gracePeriod=30 Feb 18 00:49:51 crc kubenswrapper[4847]: I0218 00:49:51.274037 4847 generic.go:334] "Generic (PLEG): container finished" podID="8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e" containerID="eca06a4157ff1cf372238e5b47a78413428dfb85975e0ea3aae4183ffbfd24ce" exitCode=0 Feb 18 00:49:51 crc kubenswrapper[4847]: I0218 00:49:51.274113 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b8c9bd889-lvxrd" event={"ID":"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e","Type":"ContainerDied","Data":"eca06a4157ff1cf372238e5b47a78413428dfb85975e0ea3aae4183ffbfd24ce"} Feb 18 00:49:51 crc kubenswrapper[4847]: I0218 00:49:51.283955 4847 generic.go:334] "Generic (PLEG): container finished" podID="7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" containerID="8a6586c6c912ca16b875dd56e5426952da5bc78d18ab5bf036d20118ca7c8147" exitCode=0 Feb 18 00:49:51 crc kubenswrapper[4847]: I0218 00:49:51.283992 4847 generic.go:334] "Generic (PLEG): container finished" podID="7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" containerID="d5b8ccb9b0987e3a3d77f7e6dde6802856161256b5d7fe56a43c0f4fb9a5379b" exitCode=2 Feb 18 00:49:51 crc kubenswrapper[4847]: I0218 00:49:51.284019 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee","Type":"ContainerDied","Data":"8a6586c6c912ca16b875dd56e5426952da5bc78d18ab5bf036d20118ca7c8147"} Feb 18 00:49:51 crc kubenswrapper[4847]: I0218 00:49:51.284054 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee","Type":"ContainerDied","Data":"d5b8ccb9b0987e3a3d77f7e6dde6802856161256b5d7fe56a43c0f4fb9a5379b"} Feb 18 00:49:52 crc kubenswrapper[4847]: I0218 00:49:52.328700 4847 generic.go:334] "Generic (PLEG): container finished" podID="7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" 
containerID="e0e9a24947ab409d9c4ce75895f740f9ae4c98ebf7d3780bcc14233625ac2970" exitCode=0 Feb 18 00:49:52 crc kubenswrapper[4847]: I0218 00:49:52.328764 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee","Type":"ContainerDied","Data":"e0e9a24947ab409d9c4ce75895f740f9ae4c98ebf7d3780bcc14233625ac2970"} Feb 18 00:49:54 crc kubenswrapper[4847]: I0218 00:49:54.093291 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:49:54 crc kubenswrapper[4847]: I0218 00:49:54.097974 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:49:55 crc kubenswrapper[4847]: I0218 00:49:55.380488 4847 generic.go:334] "Generic (PLEG): container finished" podID="7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" containerID="89f7b96e7da535eb82f402e6513c835e08f20f6f3fd9ac1669631202c11956ec" exitCode=0 Feb 18 00:49:55 crc kubenswrapper[4847]: I0218 00:49:55.380967 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee","Type":"ContainerDied","Data":"89f7b96e7da535eb82f402e6513c835e08f20f6f3fd9ac1669631202c11956ec"} Feb 18 00:49:55 crc kubenswrapper[4847]: E0218 00:49:55.979082 4847 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eca06a4157ff1cf372238e5b47a78413428dfb85975e0ea3aae4183ffbfd24ce is running failed: container 
process not found" containerID="eca06a4157ff1cf372238e5b47a78413428dfb85975e0ea3aae4183ffbfd24ce" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 00:49:55 crc kubenswrapper[4847]: E0218 00:49:55.980104 4847 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eca06a4157ff1cf372238e5b47a78413428dfb85975e0ea3aae4183ffbfd24ce is running failed: container process not found" containerID="eca06a4157ff1cf372238e5b47a78413428dfb85975e0ea3aae4183ffbfd24ce" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 00:49:55 crc kubenswrapper[4847]: E0218 00:49:55.980431 4847 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eca06a4157ff1cf372238e5b47a78413428dfb85975e0ea3aae4183ffbfd24ce is running failed: container process not found" containerID="eca06a4157ff1cf372238e5b47a78413428dfb85975e0ea3aae4183ffbfd24ce" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 18 00:49:55 crc kubenswrapper[4847]: E0218 00:49:55.980475 4847 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eca06a4157ff1cf372238e5b47a78413428dfb85975e0ea3aae4183ffbfd24ce is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-5b8c9bd889-lvxrd" podUID="8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e" containerName="heat-engine" Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.846366 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5b8c9bd889-lvxrd" Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.880448 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-combined-ca-bundle\") pod \"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e\" (UID: \"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e\") " Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.881270 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-config-data\") pod \"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e\" (UID: \"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e\") " Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.881365 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-config-data-custom\") pod \"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e\" (UID: \"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e\") " Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.881440 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxcb4\" (UniqueName: \"kubernetes.io/projected/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-kube-api-access-nxcb4\") pod \"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e\" (UID: \"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e\") " Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.889464 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e" (UID: "8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.899565 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-kube-api-access-nxcb4" (OuterVolumeSpecName: "kube-api-access-nxcb4") pod "8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e" (UID: "8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e"). InnerVolumeSpecName "kube-api-access-nxcb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.937994 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e" (UID: "8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.961403 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.970889 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-config-data" (OuterVolumeSpecName: "config-data") pod "8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e" (UID: "8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.983611 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-log-httpd\") pod \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.983655 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgl6h\" (UniqueName: \"kubernetes.io/projected/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-kube-api-access-dgl6h\") pod \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.983737 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-scripts\") pod \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.983894 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-ceilometer-tls-certs\") pod \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.983926 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-run-httpd\") pod \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.984005 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-combined-ca-bundle\") pod \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.984049 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-sg-core-conf-yaml\") pod \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.984045 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" (UID: "7de9c24e-68dc-4cbc-85fe-3a4012fa2fee"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.984087 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-config-data\") pod \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\" (UID: \"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee\") " Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.984534 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.984558 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.984569 4847 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.984577 4847 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.984586 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxcb4\" (UniqueName: \"kubernetes.io/projected/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e-kube-api-access-nxcb4\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.986389 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" (UID: "7de9c24e-68dc-4cbc-85fe-3a4012fa2fee"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.991843 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-scripts" (OuterVolumeSpecName: "scripts") pod "7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" (UID: "7de9c24e-68dc-4cbc-85fe-3a4012fa2fee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:56 crc kubenswrapper[4847]: I0218 00:49:56.995880 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-kube-api-access-dgl6h" (OuterVolumeSpecName: "kube-api-access-dgl6h") pod "7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" (UID: "7de9c24e-68dc-4cbc-85fe-3a4012fa2fee"). InnerVolumeSpecName "kube-api-access-dgl6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.038889 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" (UID: "7de9c24e-68dc-4cbc-85fe-3a4012fa2fee"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.054916 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" (UID: "7de9c24e-68dc-4cbc-85fe-3a4012fa2fee"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.089205 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.089245 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.089262 4847 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.089274 4847 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-sg-core-conf-yaml\") on node \"crc\" 
DevicePath \"\"" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.089285 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgl6h\" (UniqueName: \"kubernetes.io/projected/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-kube-api-access-dgl6h\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.109663 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" (UID: "7de9c24e-68dc-4cbc-85fe-3a4012fa2fee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.141993 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-config-data" (OuterVolumeSpecName: "config-data") pod "7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" (UID: "7de9c24e-68dc-4cbc-85fe-3a4012fa2fee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.191271 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.191522 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.446376 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5b8c9bd889-lvxrd" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.447243 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b8c9bd889-lvxrd" event={"ID":"8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e","Type":"ContainerDied","Data":"5d117fc3b4d4dcfd9f7bc9678cb84b7b569918b81251026b539701341f651709"} Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.447304 4847 scope.go:117] "RemoveContainer" containerID="eca06a4157ff1cf372238e5b47a78413428dfb85975e0ea3aae4183ffbfd24ce" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.461901 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7de9c24e-68dc-4cbc-85fe-3a4012fa2fee","Type":"ContainerDied","Data":"3eb914144bffa2c24603aa6fd90cc7aaf4e546f851938b8f33d3f45ce3314287"} Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.461937 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.464442 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-69tbz" event={"ID":"ceb3804b-7097-4c08-9db9-8b08a71eb896","Type":"ContainerStarted","Data":"68471a24a5b96a1956e52782b843f252bf133dfc56419b56793c73415b50f783"} Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.492811 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-69tbz" podStartSLOduration=2.311977341 podStartE2EDuration="13.492794558s" podCreationTimestamp="2026-02-18 00:49:44 +0000 UTC" firstStartedPulling="2026-02-18 00:49:45.199124954 +0000 UTC m=+1458.576475896" lastFinishedPulling="2026-02-18 00:49:56.379942171 +0000 UTC m=+1469.757293113" observedRunningTime="2026-02-18 00:49:57.489078995 +0000 UTC m=+1470.866429937" watchObservedRunningTime="2026-02-18 00:49:57.492794558 +0000 UTC m=+1470.870145500" Feb 18 
00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.521149 4847 scope.go:117] "RemoveContainer" containerID="8a6586c6c912ca16b875dd56e5426952da5bc78d18ab5bf036d20118ca7c8147" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.524340 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5b8c9bd889-lvxrd"] Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.547662 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-5b8c9bd889-lvxrd"] Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.560172 4847 scope.go:117] "RemoveContainer" containerID="d5b8ccb9b0987e3a3d77f7e6dde6802856161256b5d7fe56a43c0f4fb9a5379b" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.562434 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.572642 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.580659 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:49:57 crc kubenswrapper[4847]: E0218 00:49:57.581202 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" containerName="proxy-httpd" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.581220 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" containerName="proxy-httpd" Feb 18 00:49:57 crc kubenswrapper[4847]: E0218 00:49:57.581235 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" containerName="sg-core" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.581242 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" containerName="sg-core" Feb 18 00:49:57 crc kubenswrapper[4847]: E0218 00:49:57.581276 4847 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" containerName="ceilometer-central-agent" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.581285 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" containerName="ceilometer-central-agent" Feb 18 00:49:57 crc kubenswrapper[4847]: E0218 00:49:57.581296 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e" containerName="heat-engine" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.581301 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e" containerName="heat-engine" Feb 18 00:49:57 crc kubenswrapper[4847]: E0218 00:49:57.581311 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" containerName="ceilometer-notification-agent" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.581318 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" containerName="ceilometer-notification-agent" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.581561 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e" containerName="heat-engine" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.581574 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" containerName="ceilometer-notification-agent" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.581581 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" containerName="sg-core" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.581592 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" containerName="proxy-httpd" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.581617 4847 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" containerName="ceilometer-central-agent" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.591152 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.592140 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.599511 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.599685 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.599769 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.600289 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-config-data\") pod \"ceilometer-0\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.600363 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.600390 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjv4h\" (UniqueName: 
\"kubernetes.io/projected/6efff60b-2776-4e4f-82cc-5b988291e869-kube-api-access-jjv4h\") pod \"ceilometer-0\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.600418 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-scripts\") pod \"ceilometer-0\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.600453 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.600517 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6efff60b-2776-4e4f-82cc-5b988291e869-run-httpd\") pod \"ceilometer-0\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.600578 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6efff60b-2776-4e4f-82cc-5b988291e869-log-httpd\") pod \"ceilometer-0\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.600613 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.603755 4847 scope.go:117] "RemoveContainer" containerID="e0e9a24947ab409d9c4ce75895f740f9ae4c98ebf7d3780bcc14233625ac2970" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.668771 4847 scope.go:117] "RemoveContainer" containerID="89f7b96e7da535eb82f402e6513c835e08f20f6f3fd9ac1669631202c11956ec" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.702137 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-scripts\") pod \"ceilometer-0\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.702210 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.702345 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6efff60b-2776-4e4f-82cc-5b988291e869-run-httpd\") pod \"ceilometer-0\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.702397 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6efff60b-2776-4e4f-82cc-5b988291e869-log-httpd\") pod \"ceilometer-0\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.702416 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.702446 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-config-data\") pod \"ceilometer-0\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.702491 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.702512 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjv4h\" (UniqueName: \"kubernetes.io/projected/6efff60b-2776-4e4f-82cc-5b988291e869-kube-api-access-jjv4h\") pod \"ceilometer-0\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.703212 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6efff60b-2776-4e4f-82cc-5b988291e869-run-httpd\") pod \"ceilometer-0\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.703646 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6efff60b-2776-4e4f-82cc-5b988291e869-log-httpd\") pod \"ceilometer-0\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: 
I0218 00:49:57.707141 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-scripts\") pod \"ceilometer-0\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.707815 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.710722 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.719078 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-config-data\") pod \"ceilometer-0\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.720880 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjv4h\" (UniqueName: \"kubernetes.io/projected/6efff60b-2776-4e4f-82cc-5b988291e869-kube-api-access-jjv4h\") pod \"ceilometer-0\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.724361 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"6efff60b-2776-4e4f-82cc-5b988291e869\") " pod="openstack/ceilometer-0" Feb 18 00:49:57 crc kubenswrapper[4847]: I0218 00:49:57.948696 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:49:58 crc kubenswrapper[4847]: I0218 00:49:58.445324 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:49:58 crc kubenswrapper[4847]: I0218 00:49:58.478244 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6efff60b-2776-4e4f-82cc-5b988291e869","Type":"ContainerStarted","Data":"4eb63c6d800811eec14c406995dfdc137a68aa558fc5a763ae5e8ba602f0abfd"} Feb 18 00:49:58 crc kubenswrapper[4847]: I0218 00:49:58.670335 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:49:59 crc kubenswrapper[4847]: I0218 00:49:59.421653 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7de9c24e-68dc-4cbc-85fe-3a4012fa2fee" path="/var/lib/kubelet/pods/7de9c24e-68dc-4cbc-85fe-3a4012fa2fee/volumes" Feb 18 00:49:59 crc kubenswrapper[4847]: I0218 00:49:59.422801 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e" path="/var/lib/kubelet/pods/8e389a17-3e84-4ed5-b2f3-59b4c42f9a8e/volumes" Feb 18 00:49:59 crc kubenswrapper[4847]: I0218 00:49:59.493659 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6efff60b-2776-4e4f-82cc-5b988291e869","Type":"ContainerStarted","Data":"8810ceaeb454c7a1fdf2257d5b4c98fd659f4f85ca97d441e85be33e6f76098a"} Feb 18 00:50:00 crc kubenswrapper[4847]: I0218 00:50:00.503074 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6efff60b-2776-4e4f-82cc-5b988291e869","Type":"ContainerStarted","Data":"cdb214fe1264134881b7127a4dd63685a691c7c607e04fab59c1f366632ca8c6"} Feb 18 00:50:02 crc kubenswrapper[4847]: I0218 00:50:02.530501 
4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6efff60b-2776-4e4f-82cc-5b988291e869","Type":"ContainerStarted","Data":"ad754e8a512d4257ba16a8e2a6c99b2d3a7b528a6024ba0fd2ef02a289e5c37e"} Feb 18 00:50:03 crc kubenswrapper[4847]: I0218 00:50:03.554527 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6efff60b-2776-4e4f-82cc-5b988291e869","Type":"ContainerStarted","Data":"bd9d742be2c26811ef6c863150f395249d525e87db3b7ebe432a53ff950c1079"} Feb 18 00:50:03 crc kubenswrapper[4847]: I0218 00:50:03.555050 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6efff60b-2776-4e4f-82cc-5b988291e869" containerName="ceilometer-central-agent" containerID="cri-o://8810ceaeb454c7a1fdf2257d5b4c98fd659f4f85ca97d441e85be33e6f76098a" gracePeriod=30 Feb 18 00:50:03 crc kubenswrapper[4847]: I0218 00:50:03.555186 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 00:50:03 crc kubenswrapper[4847]: I0218 00:50:03.555466 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6efff60b-2776-4e4f-82cc-5b988291e869" containerName="sg-core" containerID="cri-o://ad754e8a512d4257ba16a8e2a6c99b2d3a7b528a6024ba0fd2ef02a289e5c37e" gracePeriod=30 Feb 18 00:50:03 crc kubenswrapper[4847]: I0218 00:50:03.555497 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6efff60b-2776-4e4f-82cc-5b988291e869" containerName="proxy-httpd" containerID="cri-o://bd9d742be2c26811ef6c863150f395249d525e87db3b7ebe432a53ff950c1079" gracePeriod=30 Feb 18 00:50:03 crc kubenswrapper[4847]: I0218 00:50:03.555506 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6efff60b-2776-4e4f-82cc-5b988291e869" containerName="ceilometer-notification-agent" 
containerID="cri-o://cdb214fe1264134881b7127a4dd63685a691c7c607e04fab59c1f366632ca8c6" gracePeriod=30 Feb 18 00:50:03 crc kubenswrapper[4847]: I0218 00:50:03.597187 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.981130721 podStartE2EDuration="6.597166253s" podCreationTimestamp="2026-02-18 00:49:57 +0000 UTC" firstStartedPulling="2026-02-18 00:49:58.443268633 +0000 UTC m=+1471.820619575" lastFinishedPulling="2026-02-18 00:50:03.059304155 +0000 UTC m=+1476.436655107" observedRunningTime="2026-02-18 00:50:03.585241525 +0000 UTC m=+1476.962592467" watchObservedRunningTime="2026-02-18 00:50:03.597166253 +0000 UTC m=+1476.974517195" Feb 18 00:50:04 crc kubenswrapper[4847]: I0218 00:50:04.567879 4847 generic.go:334] "Generic (PLEG): container finished" podID="6efff60b-2776-4e4f-82cc-5b988291e869" containerID="bd9d742be2c26811ef6c863150f395249d525e87db3b7ebe432a53ff950c1079" exitCode=0 Feb 18 00:50:04 crc kubenswrapper[4847]: I0218 00:50:04.568224 4847 generic.go:334] "Generic (PLEG): container finished" podID="6efff60b-2776-4e4f-82cc-5b988291e869" containerID="ad754e8a512d4257ba16a8e2a6c99b2d3a7b528a6024ba0fd2ef02a289e5c37e" exitCode=2 Feb 18 00:50:04 crc kubenswrapper[4847]: I0218 00:50:04.568233 4847 generic.go:334] "Generic (PLEG): container finished" podID="6efff60b-2776-4e4f-82cc-5b988291e869" containerID="cdb214fe1264134881b7127a4dd63685a691c7c607e04fab59c1f366632ca8c6" exitCode=0 Feb 18 00:50:04 crc kubenswrapper[4847]: I0218 00:50:04.567956 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6efff60b-2776-4e4f-82cc-5b988291e869","Type":"ContainerDied","Data":"bd9d742be2c26811ef6c863150f395249d525e87db3b7ebe432a53ff950c1079"} Feb 18 00:50:04 crc kubenswrapper[4847]: I0218 00:50:04.568276 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6efff60b-2776-4e4f-82cc-5b988291e869","Type":"ContainerDied","Data":"ad754e8a512d4257ba16a8e2a6c99b2d3a7b528a6024ba0fd2ef02a289e5c37e"} Feb 18 00:50:04 crc kubenswrapper[4847]: I0218 00:50:04.568291 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6efff60b-2776-4e4f-82cc-5b988291e869","Type":"ContainerDied","Data":"cdb214fe1264134881b7127a4dd63685a691c7c607e04fab59c1f366632ca8c6"} Feb 18 00:50:08 crc kubenswrapper[4847]: I0218 00:50:08.620223 4847 generic.go:334] "Generic (PLEG): container finished" podID="ceb3804b-7097-4c08-9db9-8b08a71eb896" containerID="68471a24a5b96a1956e52782b843f252bf133dfc56419b56793c73415b50f783" exitCode=0 Feb 18 00:50:08 crc kubenswrapper[4847]: I0218 00:50:08.620368 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-69tbz" event={"ID":"ceb3804b-7097-4c08-9db9-8b08a71eb896","Type":"ContainerDied","Data":"68471a24a5b96a1956e52782b843f252bf133dfc56419b56793c73415b50f783"} Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.613851 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.634350 4847 generic.go:334] "Generic (PLEG): container finished" podID="6efff60b-2776-4e4f-82cc-5b988291e869" containerID="8810ceaeb454c7a1fdf2257d5b4c98fd659f4f85ca97d441e85be33e6f76098a" exitCode=0 Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.634649 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.635201 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6efff60b-2776-4e4f-82cc-5b988291e869","Type":"ContainerDied","Data":"8810ceaeb454c7a1fdf2257d5b4c98fd659f4f85ca97d441e85be33e6f76098a"} Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.635243 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6efff60b-2776-4e4f-82cc-5b988291e869","Type":"ContainerDied","Data":"4eb63c6d800811eec14c406995dfdc137a68aa558fc5a763ae5e8ba602f0abfd"} Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.635570 4847 scope.go:117] "RemoveContainer" containerID="bd9d742be2c26811ef6c863150f395249d525e87db3b7ebe432a53ff950c1079" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.661789 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6efff60b-2776-4e4f-82cc-5b988291e869-run-httpd\") pod \"6efff60b-2776-4e4f-82cc-5b988291e869\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.661921 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-combined-ca-bundle\") pod \"6efff60b-2776-4e4f-82cc-5b988291e869\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.661954 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjv4h\" (UniqueName: \"kubernetes.io/projected/6efff60b-2776-4e4f-82cc-5b988291e869-kube-api-access-jjv4h\") pod \"6efff60b-2776-4e4f-82cc-5b988291e869\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.662014 4847 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-ceilometer-tls-certs\") pod \"6efff60b-2776-4e4f-82cc-5b988291e869\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.662155 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-sg-core-conf-yaml\") pod \"6efff60b-2776-4e4f-82cc-5b988291e869\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.662196 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6efff60b-2776-4e4f-82cc-5b988291e869-log-httpd\") pod \"6efff60b-2776-4e4f-82cc-5b988291e869\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.662284 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-config-data\") pod \"6efff60b-2776-4e4f-82cc-5b988291e869\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.662360 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-scripts\") pod \"6efff60b-2776-4e4f-82cc-5b988291e869\" (UID: \"6efff60b-2776-4e4f-82cc-5b988291e869\") " Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.663340 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6efff60b-2776-4e4f-82cc-5b988291e869-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6efff60b-2776-4e4f-82cc-5b988291e869" (UID: "6efff60b-2776-4e4f-82cc-5b988291e869"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.669094 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6efff60b-2776-4e4f-82cc-5b988291e869-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6efff60b-2776-4e4f-82cc-5b988291e869" (UID: "6efff60b-2776-4e4f-82cc-5b988291e869"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.669245 4847 scope.go:117] "RemoveContainer" containerID="ad754e8a512d4257ba16a8e2a6c99b2d3a7b528a6024ba0fd2ef02a289e5c37e" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.669696 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-scripts" (OuterVolumeSpecName: "scripts") pod "6efff60b-2776-4e4f-82cc-5b988291e869" (UID: "6efff60b-2776-4e4f-82cc-5b988291e869"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.683845 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6efff60b-2776-4e4f-82cc-5b988291e869-kube-api-access-jjv4h" (OuterVolumeSpecName: "kube-api-access-jjv4h") pod "6efff60b-2776-4e4f-82cc-5b988291e869" (UID: "6efff60b-2776-4e4f-82cc-5b988291e869"). InnerVolumeSpecName "kube-api-access-jjv4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.712733 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6efff60b-2776-4e4f-82cc-5b988291e869" (UID: "6efff60b-2776-4e4f-82cc-5b988291e869"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.769957 4847 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6efff60b-2776-4e4f-82cc-5b988291e869-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.769996 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjv4h\" (UniqueName: \"kubernetes.io/projected/6efff60b-2776-4e4f-82cc-5b988291e869-kube-api-access-jjv4h\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.770007 4847 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.770015 4847 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6efff60b-2776-4e4f-82cc-5b988291e869-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.770025 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.790800 4847 scope.go:117] "RemoveContainer" containerID="cdb214fe1264134881b7127a4dd63685a691c7c607e04fab59c1f366632ca8c6" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.849805 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "6efff60b-2776-4e4f-82cc-5b988291e869" (UID: "6efff60b-2776-4e4f-82cc-5b988291e869"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.858788 4847 scope.go:117] "RemoveContainer" containerID="8810ceaeb454c7a1fdf2257d5b4c98fd659f4f85ca97d441e85be33e6f76098a" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.873785 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6efff60b-2776-4e4f-82cc-5b988291e869" (UID: "6efff60b-2776-4e4f-82cc-5b988291e869"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.876015 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.876040 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.916425 4847 scope.go:117] "RemoveContainer" containerID="bd9d742be2c26811ef6c863150f395249d525e87db3b7ebe432a53ff950c1079" Feb 18 00:50:09 crc kubenswrapper[4847]: E0218 00:50:09.917621 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd9d742be2c26811ef6c863150f395249d525e87db3b7ebe432a53ff950c1079\": container with ID starting with bd9d742be2c26811ef6c863150f395249d525e87db3b7ebe432a53ff950c1079 not found: ID does not exist" containerID="bd9d742be2c26811ef6c863150f395249d525e87db3b7ebe432a53ff950c1079" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.917760 4847 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"bd9d742be2c26811ef6c863150f395249d525e87db3b7ebe432a53ff950c1079"} err="failed to get container status \"bd9d742be2c26811ef6c863150f395249d525e87db3b7ebe432a53ff950c1079\": rpc error: code = NotFound desc = could not find container \"bd9d742be2c26811ef6c863150f395249d525e87db3b7ebe432a53ff950c1079\": container with ID starting with bd9d742be2c26811ef6c863150f395249d525e87db3b7ebe432a53ff950c1079 not found: ID does not exist" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.917859 4847 scope.go:117] "RemoveContainer" containerID="ad754e8a512d4257ba16a8e2a6c99b2d3a7b528a6024ba0fd2ef02a289e5c37e" Feb 18 00:50:09 crc kubenswrapper[4847]: E0218 00:50:09.918346 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad754e8a512d4257ba16a8e2a6c99b2d3a7b528a6024ba0fd2ef02a289e5c37e\": container with ID starting with ad754e8a512d4257ba16a8e2a6c99b2d3a7b528a6024ba0fd2ef02a289e5c37e not found: ID does not exist" containerID="ad754e8a512d4257ba16a8e2a6c99b2d3a7b528a6024ba0fd2ef02a289e5c37e" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.918402 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad754e8a512d4257ba16a8e2a6c99b2d3a7b528a6024ba0fd2ef02a289e5c37e"} err="failed to get container status \"ad754e8a512d4257ba16a8e2a6c99b2d3a7b528a6024ba0fd2ef02a289e5c37e\": rpc error: code = NotFound desc = could not find container \"ad754e8a512d4257ba16a8e2a6c99b2d3a7b528a6024ba0fd2ef02a289e5c37e\": container with ID starting with ad754e8a512d4257ba16a8e2a6c99b2d3a7b528a6024ba0fd2ef02a289e5c37e not found: ID does not exist" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.918428 4847 scope.go:117] "RemoveContainer" containerID="cdb214fe1264134881b7127a4dd63685a691c7c607e04fab59c1f366632ca8c6" Feb 18 00:50:09 crc kubenswrapper[4847]: E0218 00:50:09.918727 4847 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"cdb214fe1264134881b7127a4dd63685a691c7c607e04fab59c1f366632ca8c6\": container with ID starting with cdb214fe1264134881b7127a4dd63685a691c7c607e04fab59c1f366632ca8c6 not found: ID does not exist" containerID="cdb214fe1264134881b7127a4dd63685a691c7c607e04fab59c1f366632ca8c6" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.918821 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdb214fe1264134881b7127a4dd63685a691c7c607e04fab59c1f366632ca8c6"} err="failed to get container status \"cdb214fe1264134881b7127a4dd63685a691c7c607e04fab59c1f366632ca8c6\": rpc error: code = NotFound desc = could not find container \"cdb214fe1264134881b7127a4dd63685a691c7c607e04fab59c1f366632ca8c6\": container with ID starting with cdb214fe1264134881b7127a4dd63685a691c7c607e04fab59c1f366632ca8c6 not found: ID does not exist" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.918887 4847 scope.go:117] "RemoveContainer" containerID="8810ceaeb454c7a1fdf2257d5b4c98fd659f4f85ca97d441e85be33e6f76098a" Feb 18 00:50:09 crc kubenswrapper[4847]: E0218 00:50:09.919264 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8810ceaeb454c7a1fdf2257d5b4c98fd659f4f85ca97d441e85be33e6f76098a\": container with ID starting with 8810ceaeb454c7a1fdf2257d5b4c98fd659f4f85ca97d441e85be33e6f76098a not found: ID does not exist" containerID="8810ceaeb454c7a1fdf2257d5b4c98fd659f4f85ca97d441e85be33e6f76098a" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.919290 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8810ceaeb454c7a1fdf2257d5b4c98fd659f4f85ca97d441e85be33e6f76098a"} err="failed to get container status \"8810ceaeb454c7a1fdf2257d5b4c98fd659f4f85ca97d441e85be33e6f76098a\": rpc error: code = NotFound desc = could not find container 
\"8810ceaeb454c7a1fdf2257d5b4c98fd659f4f85ca97d441e85be33e6f76098a\": container with ID starting with 8810ceaeb454c7a1fdf2257d5b4c98fd659f4f85ca97d441e85be33e6f76098a not found: ID does not exist" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.941884 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-config-data" (OuterVolumeSpecName: "config-data") pod "6efff60b-2776-4e4f-82cc-5b988291e869" (UID: "6efff60b-2776-4e4f-82cc-5b988291e869"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:09 crc kubenswrapper[4847]: I0218 00:50:09.987013 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6efff60b-2776-4e4f-82cc-5b988291e869-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.098760 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-69tbz" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.193673 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb3804b-7097-4c08-9db9-8b08a71eb896-config-data\") pod \"ceb3804b-7097-4c08-9db9-8b08a71eb896\" (UID: \"ceb3804b-7097-4c08-9db9-8b08a71eb896\") " Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.194168 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb3804b-7097-4c08-9db9-8b08a71eb896-scripts\") pod \"ceb3804b-7097-4c08-9db9-8b08a71eb896\" (UID: \"ceb3804b-7097-4c08-9db9-8b08a71eb896\") " Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.194243 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9lqx\" (UniqueName: \"kubernetes.io/projected/ceb3804b-7097-4c08-9db9-8b08a71eb896-kube-api-access-q9lqx\") pod \"ceb3804b-7097-4c08-9db9-8b08a71eb896\" (UID: \"ceb3804b-7097-4c08-9db9-8b08a71eb896\") " Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.194444 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb3804b-7097-4c08-9db9-8b08a71eb896-combined-ca-bundle\") pod \"ceb3804b-7097-4c08-9db9-8b08a71eb896\" (UID: \"ceb3804b-7097-4c08-9db9-8b08a71eb896\") " Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.199033 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceb3804b-7097-4c08-9db9-8b08a71eb896-kube-api-access-q9lqx" (OuterVolumeSpecName: "kube-api-access-q9lqx") pod "ceb3804b-7097-4c08-9db9-8b08a71eb896" (UID: "ceb3804b-7097-4c08-9db9-8b08a71eb896"). InnerVolumeSpecName "kube-api-access-q9lqx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.199211 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb3804b-7097-4c08-9db9-8b08a71eb896-scripts" (OuterVolumeSpecName: "scripts") pod "ceb3804b-7097-4c08-9db9-8b08a71eb896" (UID: "ceb3804b-7097-4c08-9db9-8b08a71eb896"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.232847 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb3804b-7097-4c08-9db9-8b08a71eb896-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ceb3804b-7097-4c08-9db9-8b08a71eb896" (UID: "ceb3804b-7097-4c08-9db9-8b08a71eb896"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.238315 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb3804b-7097-4c08-9db9-8b08a71eb896-config-data" (OuterVolumeSpecName: "config-data") pod "ceb3804b-7097-4c08-9db9-8b08a71eb896" (UID: "ceb3804b-7097-4c08-9db9-8b08a71eb896"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.296639 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb3804b-7097-4c08-9db9-8b08a71eb896-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.296858 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9lqx\" (UniqueName: \"kubernetes.io/projected/ceb3804b-7097-4c08-9db9-8b08a71eb896-kube-api-access-q9lqx\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.296948 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb3804b-7097-4c08-9db9-8b08a71eb896-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.297004 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb3804b-7097-4c08-9db9-8b08a71eb896-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.333279 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.342394 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.359177 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:50:10 crc kubenswrapper[4847]: E0218 00:50:10.360091 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb3804b-7097-4c08-9db9-8b08a71eb896" containerName="nova-cell0-conductor-db-sync" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.360170 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb3804b-7097-4c08-9db9-8b08a71eb896" containerName="nova-cell0-conductor-db-sync" Feb 18 00:50:10 
crc kubenswrapper[4847]: E0218 00:50:10.360246 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6efff60b-2776-4e4f-82cc-5b988291e869" containerName="ceilometer-central-agent" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.360319 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="6efff60b-2776-4e4f-82cc-5b988291e869" containerName="ceilometer-central-agent" Feb 18 00:50:10 crc kubenswrapper[4847]: E0218 00:50:10.360387 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6efff60b-2776-4e4f-82cc-5b988291e869" containerName="proxy-httpd" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.360441 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="6efff60b-2776-4e4f-82cc-5b988291e869" containerName="proxy-httpd" Feb 18 00:50:10 crc kubenswrapper[4847]: E0218 00:50:10.360498 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6efff60b-2776-4e4f-82cc-5b988291e869" containerName="sg-core" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.360548 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="6efff60b-2776-4e4f-82cc-5b988291e869" containerName="sg-core" Feb 18 00:50:10 crc kubenswrapper[4847]: E0218 00:50:10.360627 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6efff60b-2776-4e4f-82cc-5b988291e869" containerName="ceilometer-notification-agent" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.360690 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="6efff60b-2776-4e4f-82cc-5b988291e869" containerName="ceilometer-notification-agent" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.360942 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="6efff60b-2776-4e4f-82cc-5b988291e869" containerName="ceilometer-central-agent" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.361019 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceb3804b-7097-4c08-9db9-8b08a71eb896" 
containerName="nova-cell0-conductor-db-sync" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.361076 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="6efff60b-2776-4e4f-82cc-5b988291e869" containerName="ceilometer-notification-agent" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.361140 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="6efff60b-2776-4e4f-82cc-5b988291e869" containerName="sg-core" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.361200 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="6efff60b-2776-4e4f-82cc-5b988291e869" containerName="proxy-httpd" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.365860 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.368245 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.368511 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.372690 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.384966 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.409859 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkcrl\" (UniqueName: \"kubernetes.io/projected/ab2a4151-c745-47c2-bb78-19325cff2a61-kube-api-access-jkcrl\") pod \"ceilometer-0\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.409969 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab2a4151-c745-47c2-bb78-19325cff2a61-run-httpd\") pod \"ceilometer-0\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.409997 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-config-data\") pod \"ceilometer-0\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.410023 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.410052 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-scripts\") pod \"ceilometer-0\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.410076 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.410201 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ab2a4151-c745-47c2-bb78-19325cff2a61-log-httpd\") pod \"ceilometer-0\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.410225 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.513077 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkcrl\" (UniqueName: \"kubernetes.io/projected/ab2a4151-c745-47c2-bb78-19325cff2a61-kube-api-access-jkcrl\") pod \"ceilometer-0\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.513481 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab2a4151-c745-47c2-bb78-19325cff2a61-run-httpd\") pod \"ceilometer-0\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.513618 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-config-data\") pod \"ceilometer-0\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.513700 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 
crc kubenswrapper[4847]: I0218 00:50:10.513792 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-scripts\") pod \"ceilometer-0\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.513864 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.514032 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab2a4151-c745-47c2-bb78-19325cff2a61-log-httpd\") pod \"ceilometer-0\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.514125 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.514392 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab2a4151-c745-47c2-bb78-19325cff2a61-run-httpd\") pod \"ceilometer-0\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.515544 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab2a4151-c745-47c2-bb78-19325cff2a61-log-httpd\") pod \"ceilometer-0\" 
(UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.519222 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.519642 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-scripts\") pod \"ceilometer-0\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.521670 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.523871 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.525662 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-config-data\") pod \"ceilometer-0\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.542587 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkcrl\" (UniqueName: 
\"kubernetes.io/projected/ab2a4151-c745-47c2-bb78-19325cff2a61-kube-api-access-jkcrl\") pod \"ceilometer-0\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.651592 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-69tbz" event={"ID":"ceb3804b-7097-4c08-9db9-8b08a71eb896","Type":"ContainerDied","Data":"214d3ab3d000e3ae16ac8a137a5a562619cedab5ed583d4312b31ffd7dd6e183"} Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.651656 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="214d3ab3d000e3ae16ac8a137a5a562619cedab5ed583d4312b31ffd7dd6e183" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.651731 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-69tbz" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.682750 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.761910 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.763532 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.766344 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.766743 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5vtws" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.783975 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.820576 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ca6c74e-5f00-416d-aa49-5132671a351a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4ca6c74e-5f00-416d-aa49-5132671a351a\") " pod="openstack/nova-cell0-conductor-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.820760 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ca6c74e-5f00-416d-aa49-5132671a351a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4ca6c74e-5f00-416d-aa49-5132671a351a\") " pod="openstack/nova-cell0-conductor-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.821024 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xvdv\" (UniqueName: \"kubernetes.io/projected/4ca6c74e-5f00-416d-aa49-5132671a351a-kube-api-access-7xvdv\") pod \"nova-cell0-conductor-0\" (UID: \"4ca6c74e-5f00-416d-aa49-5132671a351a\") " pod="openstack/nova-cell0-conductor-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.923168 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4ca6c74e-5f00-416d-aa49-5132671a351a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4ca6c74e-5f00-416d-aa49-5132671a351a\") " pod="openstack/nova-cell0-conductor-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.923301 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ca6c74e-5f00-416d-aa49-5132671a351a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4ca6c74e-5f00-416d-aa49-5132671a351a\") " pod="openstack/nova-cell0-conductor-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.923334 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xvdv\" (UniqueName: \"kubernetes.io/projected/4ca6c74e-5f00-416d-aa49-5132671a351a-kube-api-access-7xvdv\") pod \"nova-cell0-conductor-0\" (UID: \"4ca6c74e-5f00-416d-aa49-5132671a351a\") " pod="openstack/nova-cell0-conductor-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.928102 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ca6c74e-5f00-416d-aa49-5132671a351a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4ca6c74e-5f00-416d-aa49-5132671a351a\") " pod="openstack/nova-cell0-conductor-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.930880 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ca6c74e-5f00-416d-aa49-5132671a351a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4ca6c74e-5f00-416d-aa49-5132671a351a\") " pod="openstack/nova-cell0-conductor-0" Feb 18 00:50:10 crc kubenswrapper[4847]: I0218 00:50:10.939692 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xvdv\" (UniqueName: \"kubernetes.io/projected/4ca6c74e-5f00-416d-aa49-5132671a351a-kube-api-access-7xvdv\") pod \"nova-cell0-conductor-0\" (UID: 
\"4ca6c74e-5f00-416d-aa49-5132671a351a\") " pod="openstack/nova-cell0-conductor-0" Feb 18 00:50:11 crc kubenswrapper[4847]: I0218 00:50:11.087480 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 18 00:50:11 crc kubenswrapper[4847]: I0218 00:50:11.194507 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:50:11 crc kubenswrapper[4847]: I0218 00:50:11.423925 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6efff60b-2776-4e4f-82cc-5b988291e869" path="/var/lib/kubelet/pods/6efff60b-2776-4e4f-82cc-5b988291e869/volumes" Feb 18 00:50:11 crc kubenswrapper[4847]: W0218 00:50:11.635961 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ca6c74e_5f00_416d_aa49_5132671a351a.slice/crio-4e98e420a3de4020121b6d5e92a34cedcc33cde3d986e41a83d4ef1edca2b007 WatchSource:0}: Error finding container 4e98e420a3de4020121b6d5e92a34cedcc33cde3d986e41a83d4ef1edca2b007: Status 404 returned error can't find the container with id 4e98e420a3de4020121b6d5e92a34cedcc33cde3d986e41a83d4ef1edca2b007 Feb 18 00:50:11 crc kubenswrapper[4847]: I0218 00:50:11.637956 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 18 00:50:11 crc kubenswrapper[4847]: I0218 00:50:11.665455 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4ca6c74e-5f00-416d-aa49-5132671a351a","Type":"ContainerStarted","Data":"4e98e420a3de4020121b6d5e92a34cedcc33cde3d986e41a83d4ef1edca2b007"} Feb 18 00:50:11 crc kubenswrapper[4847]: I0218 00:50:11.667276 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab2a4151-c745-47c2-bb78-19325cff2a61","Type":"ContainerStarted","Data":"640e5a08b734b79b03a8d4ab923ab98afe0f16e107c820710f61870152a797c6"} Feb 18 00:50:12 crc 
kubenswrapper[4847]: I0218 00:50:12.679272 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab2a4151-c745-47c2-bb78-19325cff2a61","Type":"ContainerStarted","Data":"f5d4f54d55100799d054e69a181b629fcbd5eed99c3669d4db498dad071384d7"} Feb 18 00:50:12 crc kubenswrapper[4847]: I0218 00:50:12.680097 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab2a4151-c745-47c2-bb78-19325cff2a61","Type":"ContainerStarted","Data":"e87fa9c7984bc0d5fceb700f4508698b9309b96bf4ef3ae01e22886c7b94c0c4"} Feb 18 00:50:12 crc kubenswrapper[4847]: I0218 00:50:12.683478 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4ca6c74e-5f00-416d-aa49-5132671a351a","Type":"ContainerStarted","Data":"e333dbaac2b8ed8f0f0867935b51df1873e4bb70d1e170b34139fd3d936d1d50"} Feb 18 00:50:12 crc kubenswrapper[4847]: I0218 00:50:12.684922 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 18 00:50:12 crc kubenswrapper[4847]: I0218 00:50:12.700668 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.700644518 podStartE2EDuration="2.700644518s" podCreationTimestamp="2026-02-18 00:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:50:12.700337551 +0000 UTC m=+1486.077688493" watchObservedRunningTime="2026-02-18 00:50:12.700644518 +0000 UTC m=+1486.077995470" Feb 18 00:50:13 crc kubenswrapper[4847]: I0218 00:50:13.720179 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab2a4151-c745-47c2-bb78-19325cff2a61","Type":"ContainerStarted","Data":"4884b0b1db047893e8ff5cbc0cf096ba65ec33d14799c4a00ed2048148ebf3ca"} Feb 18 00:50:14 crc kubenswrapper[4847]: I0218 00:50:14.733129 4847 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab2a4151-c745-47c2-bb78-19325cff2a61","Type":"ContainerStarted","Data":"68b1b2ad9e5a678267ba435889034d5cc123ddc71093a2bda8f3f8fe7a8d05b0"} Feb 18 00:50:15 crc kubenswrapper[4847]: I0218 00:50:15.745666 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.120344 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.154554 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.986503501 podStartE2EDuration="6.154535668s" podCreationTimestamp="2026-02-18 00:50:10 +0000 UTC" firstStartedPulling="2026-02-18 00:50:11.20417777 +0000 UTC m=+1484.581528712" lastFinishedPulling="2026-02-18 00:50:14.372209937 +0000 UTC m=+1487.749560879" observedRunningTime="2026-02-18 00:50:14.758967595 +0000 UTC m=+1488.136318537" watchObservedRunningTime="2026-02-18 00:50:16.154535668 +0000 UTC m=+1489.531886610" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.665940 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-sbsm7"] Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.668968 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-sbsm7" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.674199 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.674398 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.685285 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-sbsm7"] Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.755840 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-scripts\") pod \"nova-cell0-cell-mapping-sbsm7\" (UID: \"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699\") " pod="openstack/nova-cell0-cell-mapping-sbsm7" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.756035 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-sbsm7\" (UID: \"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699\") " pod="openstack/nova-cell0-cell-mapping-sbsm7" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.756209 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lrjk\" (UniqueName: \"kubernetes.io/projected/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-kube-api-access-2lrjk\") pod \"nova-cell0-cell-mapping-sbsm7\" (UID: \"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699\") " pod="openstack/nova-cell0-cell-mapping-sbsm7" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.756267 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-config-data\") pod \"nova-cell0-cell-mapping-sbsm7\" (UID: \"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699\") " pod="openstack/nova-cell0-cell-mapping-sbsm7" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.848583 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.870872 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-sbsm7\" (UID: \"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699\") " pod="openstack/nova-cell0-cell-mapping-sbsm7" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.871063 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lrjk\" (UniqueName: \"kubernetes.io/projected/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-kube-api-access-2lrjk\") pod \"nova-cell0-cell-mapping-sbsm7\" (UID: \"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699\") " pod="openstack/nova-cell0-cell-mapping-sbsm7" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.871129 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-config-data\") pod \"nova-cell0-cell-mapping-sbsm7\" (UID: \"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699\") " pod="openstack/nova-cell0-cell-mapping-sbsm7" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.871174 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-scripts\") pod \"nova-cell0-cell-mapping-sbsm7\" (UID: \"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699\") " pod="openstack/nova-cell0-cell-mapping-sbsm7" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.871904 4847 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.878355 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.880474 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.881489 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-config-data\") pod \"nova-cell0-cell-mapping-sbsm7\" (UID: \"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699\") " pod="openstack/nova-cell0-cell-mapping-sbsm7" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.886126 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-scripts\") pod \"nova-cell0-cell-mapping-sbsm7\" (UID: \"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699\") " pod="openstack/nova-cell0-cell-mapping-sbsm7" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.889632 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.889962 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-sbsm7\" (UID: \"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699\") " pod="openstack/nova-cell0-cell-mapping-sbsm7" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.894513 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.919292 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lrjk\" (UniqueName: \"kubernetes.io/projected/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-kube-api-access-2lrjk\") pod \"nova-cell0-cell-mapping-sbsm7\" (UID: \"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699\") " pod="openstack/nova-cell0-cell-mapping-sbsm7" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.919367 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.930437 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.973536 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f64ddc2-419d-4a08-8418-d19033c2b549-config-data\") pod \"nova-scheduler-0\" (UID: \"8f64ddc2-419d-4a08-8418-d19033c2b549\") " pod="openstack/nova-scheduler-0" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.973661 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f64ddc2-419d-4a08-8418-d19033c2b549-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"8f64ddc2-419d-4a08-8418-d19033c2b549\") " pod="openstack/nova-scheduler-0" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.973709 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24af341d-fcce-475e-95b2-ddd3c8d30114-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"24af341d-fcce-475e-95b2-ddd3c8d30114\") " pod="openstack/nova-api-0" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.973733 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24af341d-fcce-475e-95b2-ddd3c8d30114-config-data\") pod \"nova-api-0\" (UID: \"24af341d-fcce-475e-95b2-ddd3c8d30114\") " pod="openstack/nova-api-0" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.973781 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24af341d-fcce-475e-95b2-ddd3c8d30114-logs\") pod \"nova-api-0\" (UID: \"24af341d-fcce-475e-95b2-ddd3c8d30114\") " pod="openstack/nova-api-0" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.973808 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbxtg\" (UniqueName: \"kubernetes.io/projected/24af341d-fcce-475e-95b2-ddd3c8d30114-kube-api-access-lbxtg\") pod \"nova-api-0\" (UID: \"24af341d-fcce-475e-95b2-ddd3c8d30114\") " pod="openstack/nova-api-0" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.973834 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz9b9\" (UniqueName: \"kubernetes.io/projected/8f64ddc2-419d-4a08-8418-d19033c2b549-kube-api-access-jz9b9\") pod \"nova-scheduler-0\" (UID: \"8f64ddc2-419d-4a08-8418-d19033c2b549\") " pod="openstack/nova-scheduler-0" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 
00:50:16.980445 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.981962 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.992039 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 00:50:16 crc kubenswrapper[4847]: I0218 00:50:16.993085 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-sbsm7" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.069290 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.075498 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d54442ab-bec7-429c-ae47-6c781844eb4b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d54442ab-bec7-429c-ae47-6c781844eb4b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.075560 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f64ddc2-419d-4a08-8418-d19033c2b549-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8f64ddc2-419d-4a08-8418-d19033c2b549\") " pod="openstack/nova-scheduler-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.075642 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24af341d-fcce-475e-95b2-ddd3c8d30114-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"24af341d-fcce-475e-95b2-ddd3c8d30114\") " pod="openstack/nova-api-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 
00:50:17.075667 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d54442ab-bec7-429c-ae47-6c781844eb4b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d54442ab-bec7-429c-ae47-6c781844eb4b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.075705 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24af341d-fcce-475e-95b2-ddd3c8d30114-config-data\") pod \"nova-api-0\" (UID: \"24af341d-fcce-475e-95b2-ddd3c8d30114\") " pod="openstack/nova-api-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.075756 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24af341d-fcce-475e-95b2-ddd3c8d30114-logs\") pod \"nova-api-0\" (UID: \"24af341d-fcce-475e-95b2-ddd3c8d30114\") " pod="openstack/nova-api-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.075797 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbxtg\" (UniqueName: \"kubernetes.io/projected/24af341d-fcce-475e-95b2-ddd3c8d30114-kube-api-access-lbxtg\") pod \"nova-api-0\" (UID: \"24af341d-fcce-475e-95b2-ddd3c8d30114\") " pod="openstack/nova-api-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.075825 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz9b9\" (UniqueName: \"kubernetes.io/projected/8f64ddc2-419d-4a08-8418-d19033c2b549-kube-api-access-jz9b9\") pod \"nova-scheduler-0\" (UID: \"8f64ddc2-419d-4a08-8418-d19033c2b549\") " pod="openstack/nova-scheduler-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.075877 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8f64ddc2-419d-4a08-8418-d19033c2b549-config-data\") pod \"nova-scheduler-0\" (UID: \"8f64ddc2-419d-4a08-8418-d19033c2b549\") " pod="openstack/nova-scheduler-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.075913 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6mlg\" (UniqueName: \"kubernetes.io/projected/d54442ab-bec7-429c-ae47-6c781844eb4b-kube-api-access-k6mlg\") pod \"nova-cell1-novncproxy-0\" (UID: \"d54442ab-bec7-429c-ae47-6c781844eb4b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.082116 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24af341d-fcce-475e-95b2-ddd3c8d30114-logs\") pod \"nova-api-0\" (UID: \"24af341d-fcce-475e-95b2-ddd3c8d30114\") " pod="openstack/nova-api-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.094024 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f64ddc2-419d-4a08-8418-d19033c2b549-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8f64ddc2-419d-4a08-8418-d19033c2b549\") " pod="openstack/nova-scheduler-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.131505 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24af341d-fcce-475e-95b2-ddd3c8d30114-config-data\") pod \"nova-api-0\" (UID: \"24af341d-fcce-475e-95b2-ddd3c8d30114\") " pod="openstack/nova-api-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.132316 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f64ddc2-419d-4a08-8418-d19033c2b549-config-data\") pod \"nova-scheduler-0\" (UID: \"8f64ddc2-419d-4a08-8418-d19033c2b549\") " pod="openstack/nova-scheduler-0" Feb 18 00:50:17 crc 
kubenswrapper[4847]: I0218 00:50:17.138289 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24af341d-fcce-475e-95b2-ddd3c8d30114-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"24af341d-fcce-475e-95b2-ddd3c8d30114\") " pod="openstack/nova-api-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.145369 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz9b9\" (UniqueName: \"kubernetes.io/projected/8f64ddc2-419d-4a08-8418-d19033c2b549-kube-api-access-jz9b9\") pod \"nova-scheduler-0\" (UID: \"8f64ddc2-419d-4a08-8418-d19033c2b549\") " pod="openstack/nova-scheduler-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.145756 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbxtg\" (UniqueName: \"kubernetes.io/projected/24af341d-fcce-475e-95b2-ddd3c8d30114-kube-api-access-lbxtg\") pod \"nova-api-0\" (UID: \"24af341d-fcce-475e-95b2-ddd3c8d30114\") " pod="openstack/nova-api-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.177360 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6mlg\" (UniqueName: \"kubernetes.io/projected/d54442ab-bec7-429c-ae47-6c781844eb4b-kube-api-access-k6mlg\") pod \"nova-cell1-novncproxy-0\" (UID: \"d54442ab-bec7-429c-ae47-6c781844eb4b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.177439 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d54442ab-bec7-429c-ae47-6c781844eb4b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d54442ab-bec7-429c-ae47-6c781844eb4b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.177506 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d54442ab-bec7-429c-ae47-6c781844eb4b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d54442ab-bec7-429c-ae47-6c781844eb4b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.184456 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d54442ab-bec7-429c-ae47-6c781844eb4b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d54442ab-bec7-429c-ae47-6c781844eb4b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.188271 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d54442ab-bec7-429c-ae47-6c781844eb4b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d54442ab-bec7-429c-ae47-6c781844eb4b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.190857 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.193554 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.195485 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.211282 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6mlg\" (UniqueName: \"kubernetes.io/projected/d54442ab-bec7-429c-ae47-6c781844eb4b-kube-api-access-k6mlg\") pod \"nova-cell1-novncproxy-0\" (UID: \"d54442ab-bec7-429c-ae47-6c781844eb4b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.221325 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.222423 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.252111 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-xmxcs"] Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.254845 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.263923 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-xmxcs"] Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.280392 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76974c5-d87a-4a52-ac85-364597594818-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f76974c5-d87a-4a52-ac85-364597594818\") " pod="openstack/nova-metadata-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.280822 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76974c5-d87a-4a52-ac85-364597594818-config-data\") pod \"nova-metadata-0\" (UID: \"f76974c5-d87a-4a52-ac85-364597594818\") " pod="openstack/nova-metadata-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.280868 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f76974c5-d87a-4a52-ac85-364597594818-logs\") pod \"nova-metadata-0\" (UID: \"f76974c5-d87a-4a52-ac85-364597594818\") " pod="openstack/nova-metadata-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.281528 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-k96hf\" (UniqueName: \"kubernetes.io/projected/f76974c5-d87a-4a52-ac85-364597594818-kube-api-access-k96hf\") pod \"nova-metadata-0\" (UID: \"f76974c5-d87a-4a52-ac85-364597594818\") " pod="openstack/nova-metadata-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.308724 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.358282 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.383386 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k96hf\" (UniqueName: \"kubernetes.io/projected/f76974c5-d87a-4a52-ac85-364597594818-kube-api-access-k96hf\") pod \"nova-metadata-0\" (UID: \"f76974c5-d87a-4a52-ac85-364597594818\") " pod="openstack/nova-metadata-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.383485 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdknr\" (UniqueName: \"kubernetes.io/projected/f4e7ff27-612f-4c09-83a5-6405f65f4f86-kube-api-access-mdknr\") pod \"dnsmasq-dns-568d7fd7cf-xmxcs\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.383509 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-xmxcs\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.383531 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f76974c5-d87a-4a52-ac85-364597594818-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f76974c5-d87a-4a52-ac85-364597594818\") " pod="openstack/nova-metadata-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.383569 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-xmxcs\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.383589 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76974c5-d87a-4a52-ac85-364597594818-config-data\") pod \"nova-metadata-0\" (UID: \"f76974c5-d87a-4a52-ac85-364597594818\") " pod="openstack/nova-metadata-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.383627 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-xmxcs\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.383670 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f76974c5-d87a-4a52-ac85-364597594818-logs\") pod \"nova-metadata-0\" (UID: \"f76974c5-d87a-4a52-ac85-364597594818\") " pod="openstack/nova-metadata-0" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.383710 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-ovsdbserver-sb\") pod 
\"dnsmasq-dns-568d7fd7cf-xmxcs\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:50:17 crc kubenswrapper[4847]: I0218 00:50:17.383795 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-config\") pod \"dnsmasq-dns-568d7fd7cf-xmxcs\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:17.386829 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f76974c5-d87a-4a52-ac85-364597594818-logs\") pod \"nova-metadata-0\" (UID: \"f76974c5-d87a-4a52-ac85-364597594818\") " pod="openstack/nova-metadata-0" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:17.398411 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76974c5-d87a-4a52-ac85-364597594818-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f76974c5-d87a-4a52-ac85-364597594818\") " pod="openstack/nova-metadata-0" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:17.399025 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76974c5-d87a-4a52-ac85-364597594818-config-data\") pod \"nova-metadata-0\" (UID: \"f76974c5-d87a-4a52-ac85-364597594818\") " pod="openstack/nova-metadata-0" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:17.426474 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k96hf\" (UniqueName: \"kubernetes.io/projected/f76974c5-d87a-4a52-ac85-364597594818-kube-api-access-k96hf\") pod \"nova-metadata-0\" (UID: \"f76974c5-d87a-4a52-ac85-364597594818\") " pod="openstack/nova-metadata-0" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:17.488731 4847 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-xmxcs\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:17.488928 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-xmxcs\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:17.489340 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-config\") pod \"dnsmasq-dns-568d7fd7cf-xmxcs\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:17.489677 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdknr\" (UniqueName: \"kubernetes.io/projected/f4e7ff27-612f-4c09-83a5-6405f65f4f86-kube-api-access-mdknr\") pod \"dnsmasq-dns-568d7fd7cf-xmxcs\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:17.489701 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-xmxcs\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:17.489764 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-xmxcs\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:17.491099 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-xmxcs\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:17.493886 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-xmxcs\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:17.497642 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-xmxcs\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:17.499008 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-config\") pod \"dnsmasq-dns-568d7fd7cf-xmxcs\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:17.509297 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-xmxcs\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:17.518091 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdknr\" (UniqueName: \"kubernetes.io/projected/f4e7ff27-612f-4c09-83a5-6405f65f4f86-kube-api-access-mdknr\") pod \"dnsmasq-dns-568d7fd7cf-xmxcs\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:17.543827 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:17.582662 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:17.760773 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-sbsm7"] Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:17.857192 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:17.857488 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab2a4151-c745-47c2-bb78-19325cff2a61" containerName="ceilometer-central-agent" containerID="cri-o://e87fa9c7984bc0d5fceb700f4508698b9309b96bf4ef3ae01e22886c7b94c0c4" gracePeriod=30 Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:17.857656 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab2a4151-c745-47c2-bb78-19325cff2a61" containerName="proxy-httpd" containerID="cri-o://68b1b2ad9e5a678267ba435889034d5cc123ddc71093a2bda8f3f8fe7a8d05b0" gracePeriod=30 Feb 18 00:50:18 crc 
kubenswrapper[4847]: I0218 00:50:17.857731 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab2a4151-c745-47c2-bb78-19325cff2a61" containerName="ceilometer-notification-agent" containerID="cri-o://f5d4f54d55100799d054e69a181b629fcbd5eed99c3669d4db498dad071384d7" gracePeriod=30 Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:17.857732 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab2a4151-c745-47c2-bb78-19325cff2a61" containerName="sg-core" containerID="cri-o://4884b0b1db047893e8ff5cbc0cf096ba65ec33d14799c4a00ed2048148ebf3ca" gracePeriod=30 Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.366088 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6bzdl"] Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.368530 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6bzdl" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.375397 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.375953 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.396248 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6bzdl"] Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.419398 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41b6174-4c4e-48d6-b094-0af6d3781553-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6bzdl\" (UID: \"c41b6174-4c4e-48d6-b094-0af6d3781553\") " pod="openstack/nova-cell1-conductor-db-sync-6bzdl" Feb 18 00:50:18 
crc kubenswrapper[4847]: I0218 00:50:18.419444 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41b6174-4c4e-48d6-b094-0af6d3781553-config-data\") pod \"nova-cell1-conductor-db-sync-6bzdl\" (UID: \"c41b6174-4c4e-48d6-b094-0af6d3781553\") " pod="openstack/nova-cell1-conductor-db-sync-6bzdl" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.419526 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-559sp\" (UniqueName: \"kubernetes.io/projected/c41b6174-4c4e-48d6-b094-0af6d3781553-kube-api-access-559sp\") pod \"nova-cell1-conductor-db-sync-6bzdl\" (UID: \"c41b6174-4c4e-48d6-b094-0af6d3781553\") " pod="openstack/nova-cell1-conductor-db-sync-6bzdl" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.419591 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c41b6174-4c4e-48d6-b094-0af6d3781553-scripts\") pod \"nova-cell1-conductor-db-sync-6bzdl\" (UID: \"c41b6174-4c4e-48d6-b094-0af6d3781553\") " pod="openstack/nova-cell1-conductor-db-sync-6bzdl" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.522142 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41b6174-4c4e-48d6-b094-0af6d3781553-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6bzdl\" (UID: \"c41b6174-4c4e-48d6-b094-0af6d3781553\") " pod="openstack/nova-cell1-conductor-db-sync-6bzdl" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.522637 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41b6174-4c4e-48d6-b094-0af6d3781553-config-data\") pod \"nova-cell1-conductor-db-sync-6bzdl\" (UID: \"c41b6174-4c4e-48d6-b094-0af6d3781553\") " 
pod="openstack/nova-cell1-conductor-db-sync-6bzdl" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.522753 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-559sp\" (UniqueName: \"kubernetes.io/projected/c41b6174-4c4e-48d6-b094-0af6d3781553-kube-api-access-559sp\") pod \"nova-cell1-conductor-db-sync-6bzdl\" (UID: \"c41b6174-4c4e-48d6-b094-0af6d3781553\") " pod="openstack/nova-cell1-conductor-db-sync-6bzdl" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.522899 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c41b6174-4c4e-48d6-b094-0af6d3781553-scripts\") pod \"nova-cell1-conductor-db-sync-6bzdl\" (UID: \"c41b6174-4c4e-48d6-b094-0af6d3781553\") " pod="openstack/nova-cell1-conductor-db-sync-6bzdl" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.529569 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c41b6174-4c4e-48d6-b094-0af6d3781553-scripts\") pod \"nova-cell1-conductor-db-sync-6bzdl\" (UID: \"c41b6174-4c4e-48d6-b094-0af6d3781553\") " pod="openstack/nova-cell1-conductor-db-sync-6bzdl" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.530225 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41b6174-4c4e-48d6-b094-0af6d3781553-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-6bzdl\" (UID: \"c41b6174-4c4e-48d6-b094-0af6d3781553\") " pod="openstack/nova-cell1-conductor-db-sync-6bzdl" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.530800 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41b6174-4c4e-48d6-b094-0af6d3781553-config-data\") pod \"nova-cell1-conductor-db-sync-6bzdl\" (UID: \"c41b6174-4c4e-48d6-b094-0af6d3781553\") " 
pod="openstack/nova-cell1-conductor-db-sync-6bzdl" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.541328 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-559sp\" (UniqueName: \"kubernetes.io/projected/c41b6174-4c4e-48d6-b094-0af6d3781553-kube-api-access-559sp\") pod \"nova-cell1-conductor-db-sync-6bzdl\" (UID: \"c41b6174-4c4e-48d6-b094-0af6d3781553\") " pod="openstack/nova-cell1-conductor-db-sync-6bzdl" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.692824 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6bzdl" Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.768268 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-xmxcs"] Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.812180 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 00:50:18 crc kubenswrapper[4847]: W0218 00:50:18.813199 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4e7ff27_612f_4c09_83a5_6405f65f4f86.slice/crio-e7dda8c3639cf5bf1d5ecbb09bb556998057e40025f8c225f0697f528d25d383 WatchSource:0}: Error finding container e7dda8c3639cf5bf1d5ecbb09bb556998057e40025f8c225f0697f528d25d383: Status 404 returned error can't find the container with id e7dda8c3639cf5bf1d5ecbb09bb556998057e40025f8c225f0697f528d25d383 Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.854724 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.900024 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-sbsm7" event={"ID":"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699","Type":"ContainerStarted","Data":"f014f3026c432a472e7ca049c99f28171b61b3457e422bd94f2e45b059f3f8da"} Feb 18 00:50:18 crc 
kubenswrapper[4847]: I0218 00:50:18.900326 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-sbsm7" event={"ID":"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699","Type":"ContainerStarted","Data":"a3ede99bc1f5655a2c3d386c787fb249b36b5460b93d1b32e610fe7bbfbd580c"} Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.903172 4847 generic.go:334] "Generic (PLEG): container finished" podID="ab2a4151-c745-47c2-bb78-19325cff2a61" containerID="68b1b2ad9e5a678267ba435889034d5cc123ddc71093a2bda8f3f8fe7a8d05b0" exitCode=0 Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.903193 4847 generic.go:334] "Generic (PLEG): container finished" podID="ab2a4151-c745-47c2-bb78-19325cff2a61" containerID="4884b0b1db047893e8ff5cbc0cf096ba65ec33d14799c4a00ed2048148ebf3ca" exitCode=2 Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.903201 4847 generic.go:334] "Generic (PLEG): container finished" podID="ab2a4151-c745-47c2-bb78-19325cff2a61" containerID="f5d4f54d55100799d054e69a181b629fcbd5eed99c3669d4db498dad071384d7" exitCode=0 Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.903213 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab2a4151-c745-47c2-bb78-19325cff2a61","Type":"ContainerDied","Data":"68b1b2ad9e5a678267ba435889034d5cc123ddc71093a2bda8f3f8fe7a8d05b0"} Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.903227 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab2a4151-c745-47c2-bb78-19325cff2a61","Type":"ContainerDied","Data":"4884b0b1db047893e8ff5cbc0cf096ba65ec33d14799c4a00ed2048148ebf3ca"} Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.903238 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab2a4151-c745-47c2-bb78-19325cff2a61","Type":"ContainerDied","Data":"f5d4f54d55100799d054e69a181b629fcbd5eed99c3669d4db498dad071384d7"} Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 
00:50:18.909693 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.946993 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:50:18 crc kubenswrapper[4847]: I0218 00:50:18.953391 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-sbsm7" podStartSLOduration=2.9533658259999997 podStartE2EDuration="2.953365826s" podCreationTimestamp="2026-02-18 00:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:50:18.924128394 +0000 UTC m=+1492.301479336" watchObservedRunningTime="2026-02-18 00:50:18.953365826 +0000 UTC m=+1492.330716768" Feb 18 00:50:19 crc kubenswrapper[4847]: I0218 00:50:19.427203 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6bzdl"] Feb 18 00:50:19 crc kubenswrapper[4847]: I0218 00:50:19.929831 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f76974c5-d87a-4a52-ac85-364597594818","Type":"ContainerStarted","Data":"70fe711aa3af5aaa34ba0dabeea926811e27bffb349d3f592bb8cf595340d653"} Feb 18 00:50:19 crc kubenswrapper[4847]: I0218 00:50:19.934853 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8f64ddc2-419d-4a08-8418-d19033c2b549","Type":"ContainerStarted","Data":"c8cb75e6ec22dc35e13ab67aa1b550a1ba771b302c8c38cd4555a483bfe112c3"} Feb 18 00:50:19 crc kubenswrapper[4847]: I0218 00:50:19.938214 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d54442ab-bec7-429c-ae47-6c781844eb4b","Type":"ContainerStarted","Data":"cac5780920d91cb2facebdb2c5571bf1c241a76b8c8ed383939593e5cbfa1523"} Feb 18 00:50:19 crc kubenswrapper[4847]: I0218 00:50:19.943044 4847 
generic.go:334] "Generic (PLEG): container finished" podID="f4e7ff27-612f-4c09-83a5-6405f65f4f86" containerID="93ea82dfc5f84fc14afed49caad664b1fb6f8bbf78e132cb51c1ad7d6b9fd6e0" exitCode=0 Feb 18 00:50:19 crc kubenswrapper[4847]: I0218 00:50:19.943088 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" event={"ID":"f4e7ff27-612f-4c09-83a5-6405f65f4f86","Type":"ContainerDied","Data":"93ea82dfc5f84fc14afed49caad664b1fb6f8bbf78e132cb51c1ad7d6b9fd6e0"} Feb 18 00:50:19 crc kubenswrapper[4847]: I0218 00:50:19.943110 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" event={"ID":"f4e7ff27-612f-4c09-83a5-6405f65f4f86","Type":"ContainerStarted","Data":"e7dda8c3639cf5bf1d5ecbb09bb556998057e40025f8c225f0697f528d25d383"} Feb 18 00:50:19 crc kubenswrapper[4847]: I0218 00:50:19.946970 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6bzdl" event={"ID":"c41b6174-4c4e-48d6-b094-0af6d3781553","Type":"ContainerStarted","Data":"fa1c63c3075a1f894f94a3315b6b537ea1c25aabee71edc000ce5baa1aa47a48"} Feb 18 00:50:19 crc kubenswrapper[4847]: I0218 00:50:19.947008 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6bzdl" event={"ID":"c41b6174-4c4e-48d6-b094-0af6d3781553","Type":"ContainerStarted","Data":"cd22abb1e66b45c998f8d1443b81af183c426adae5c7cdf1b46699da7cf2398f"} Feb 18 00:50:19 crc kubenswrapper[4847]: I0218 00:50:19.951829 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"24af341d-fcce-475e-95b2-ddd3c8d30114","Type":"ContainerStarted","Data":"51c98bc76fec7e94f33a7f36023178c266511fd173177ae7b0efc06b1eaf5805"} Feb 18 00:50:19 crc kubenswrapper[4847]: I0218 00:50:19.997628 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-6bzdl" podStartSLOduration=1.997586046 
podStartE2EDuration="1.997586046s" podCreationTimestamp="2026-02-18 00:50:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:50:19.984577952 +0000 UTC m=+1493.361928894" watchObservedRunningTime="2026-02-18 00:50:19.997586046 +0000 UTC m=+1493.374936988" Feb 18 00:50:20 crc kubenswrapper[4847]: I0218 00:50:20.968755 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" event={"ID":"f4e7ff27-612f-4c09-83a5-6405f65f4f86","Type":"ContainerStarted","Data":"dc3d0fb14c2275dac72d9aeb961f788e14afeb57963c31c02378802f106304c0"} Feb 18 00:50:20 crc kubenswrapper[4847]: I0218 00:50:20.969693 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:50:20 crc kubenswrapper[4847]: I0218 00:50:20.998725 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" podStartSLOduration=3.998704026 podStartE2EDuration="3.998704026s" podCreationTimestamp="2026-02-18 00:50:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:50:20.992951117 +0000 UTC m=+1494.370302059" watchObservedRunningTime="2026-02-18 00:50:20.998704026 +0000 UTC m=+1494.376054968" Feb 18 00:50:21 crc kubenswrapper[4847]: I0218 00:50:21.375535 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:50:21 crc kubenswrapper[4847]: I0218 00:50:21.385644 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 00:50:21 crc kubenswrapper[4847]: I0218 00:50:21.980998 4847 generic.go:334] "Generic (PLEG): container finished" podID="ab2a4151-c745-47c2-bb78-19325cff2a61" containerID="e87fa9c7984bc0d5fceb700f4508698b9309b96bf4ef3ae01e22886c7b94c0c4" exitCode=0 Feb 18 
00:50:21 crc kubenswrapper[4847]: I0218 00:50:21.981076 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab2a4151-c745-47c2-bb78-19325cff2a61","Type":"ContainerDied","Data":"e87fa9c7984bc0d5fceb700f4508698b9309b96bf4ef3ae01e22886c7b94c0c4"} Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.513075 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.582308 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab2a4151-c745-47c2-bb78-19325cff2a61-run-httpd\") pod \"ab2a4151-c745-47c2-bb78-19325cff2a61\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.582438 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-scripts\") pod \"ab2a4151-c745-47c2-bb78-19325cff2a61\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.582476 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab2a4151-c745-47c2-bb78-19325cff2a61-log-httpd\") pod \"ab2a4151-c745-47c2-bb78-19325cff2a61\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.582513 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-sg-core-conf-yaml\") pod \"ab2a4151-c745-47c2-bb78-19325cff2a61\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.582719 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-ceilometer-tls-certs\") pod \"ab2a4151-c745-47c2-bb78-19325cff2a61\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.582748 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-config-data\") pod \"ab2a4151-c745-47c2-bb78-19325cff2a61\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.582780 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkcrl\" (UniqueName: \"kubernetes.io/projected/ab2a4151-c745-47c2-bb78-19325cff2a61-kube-api-access-jkcrl\") pod \"ab2a4151-c745-47c2-bb78-19325cff2a61\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.582844 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-combined-ca-bundle\") pod \"ab2a4151-c745-47c2-bb78-19325cff2a61\" (UID: \"ab2a4151-c745-47c2-bb78-19325cff2a61\") " Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.583442 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab2a4151-c745-47c2-bb78-19325cff2a61-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ab2a4151-c745-47c2-bb78-19325cff2a61" (UID: "ab2a4151-c745-47c2-bb78-19325cff2a61"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.583957 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab2a4151-c745-47c2-bb78-19325cff2a61-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ab2a4151-c745-47c2-bb78-19325cff2a61" (UID: "ab2a4151-c745-47c2-bb78-19325cff2a61"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.600301 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-scripts" (OuterVolumeSpecName: "scripts") pod "ab2a4151-c745-47c2-bb78-19325cff2a61" (UID: "ab2a4151-c745-47c2-bb78-19325cff2a61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.600645 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab2a4151-c745-47c2-bb78-19325cff2a61-kube-api-access-jkcrl" (OuterVolumeSpecName: "kube-api-access-jkcrl") pod "ab2a4151-c745-47c2-bb78-19325cff2a61" (UID: "ab2a4151-c745-47c2-bb78-19325cff2a61"). InnerVolumeSpecName "kube-api-access-jkcrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.685908 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkcrl\" (UniqueName: \"kubernetes.io/projected/ab2a4151-c745-47c2-bb78-19325cff2a61-kube-api-access-jkcrl\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.685947 4847 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab2a4151-c745-47c2-bb78-19325cff2a61-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.685961 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.685974 4847 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab2a4151-c745-47c2-bb78-19325cff2a61-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.688190 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ab2a4151-c745-47c2-bb78-19325cff2a61" (UID: "ab2a4151-c745-47c2-bb78-19325cff2a61"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.726737 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ab2a4151-c745-47c2-bb78-19325cff2a61" (UID: "ab2a4151-c745-47c2-bb78-19325cff2a61"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.777094 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab2a4151-c745-47c2-bb78-19325cff2a61" (UID: "ab2a4151-c745-47c2-bb78-19325cff2a61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.788672 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.788704 4847 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.788715 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.867467 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-config-data" (OuterVolumeSpecName: "config-data") pod "ab2a4151-c745-47c2-bb78-19325cff2a61" (UID: "ab2a4151-c745-47c2-bb78-19325cff2a61"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:22 crc kubenswrapper[4847]: I0218 00:50:22.890188 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2a4151-c745-47c2-bb78-19325cff2a61-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.004998 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8f64ddc2-419d-4a08-8418-d19033c2b549","Type":"ContainerStarted","Data":"cb7dbbb06a74dd6b04dbb76a2355b6cadda06a1117847475a3f9c4eb42874791"} Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.008577 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="d54442ab-bec7-429c-ae47-6c781844eb4b" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://7ec4b758586d9607221da99ec8f3966dd12b11114148d7700282698bfce92415" gracePeriod=30 Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.008699 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d54442ab-bec7-429c-ae47-6c781844eb4b","Type":"ContainerStarted","Data":"7ec4b758586d9607221da99ec8f3966dd12b11114148d7700282698bfce92415"} Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.017869 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"24af341d-fcce-475e-95b2-ddd3c8d30114","Type":"ContainerStarted","Data":"6e2cc041f47cdc6c6177499b58cbb653456046f8dcd6ec586466af3eac5e847e"} Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.017914 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"24af341d-fcce-475e-95b2-ddd3c8d30114","Type":"ContainerStarted","Data":"912a5aa02801268fce64bdc934c9801d20a337511ab7cf8be2ae7c1bba9b57c2"} Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.022689 4847 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"ab2a4151-c745-47c2-bb78-19325cff2a61","Type":"ContainerDied","Data":"640e5a08b734b79b03a8d4ab923ab98afe0f16e107c820710f61870152a797c6"} Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.022732 4847 scope.go:117] "RemoveContainer" containerID="68b1b2ad9e5a678267ba435889034d5cc123ddc71093a2bda8f3f8fe7a8d05b0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.022867 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.031156 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f76974c5-d87a-4a52-ac85-364597594818","Type":"ContainerStarted","Data":"2fda87e2d268beee4b519656d4738f502c2059cc5df0e971986a493d52ab56c2"} Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.031201 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f76974c5-d87a-4a52-ac85-364597594818","Type":"ContainerStarted","Data":"8cbbea99e9e673e1548ce862f3993b8cfea9bf43ae00947a4fcc8f6bc37891ba"} Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.031322 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f76974c5-d87a-4a52-ac85-364597594818" containerName="nova-metadata-log" containerID="cri-o://8cbbea99e9e673e1548ce862f3993b8cfea9bf43ae00947a4fcc8f6bc37891ba" gracePeriod=30 Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.031580 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f76974c5-d87a-4a52-ac85-364597594818" containerName="nova-metadata-metadata" containerID="cri-o://2fda87e2d268beee4b519656d4738f502c2059cc5df0e971986a493d52ab56c2" gracePeriod=30 Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.048991 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-scheduler-0" podStartSLOduration=3.87698636 podStartE2EDuration="7.048968093s" podCreationTimestamp="2026-02-18 00:50:16 +0000 UTC" firstStartedPulling="2026-02-18 00:50:18.977234773 +0000 UTC m=+1492.354585715" lastFinishedPulling="2026-02-18 00:50:22.149216516 +0000 UTC m=+1495.526567448" observedRunningTime="2026-02-18 00:50:23.029435402 +0000 UTC m=+1496.406786364" watchObservedRunningTime="2026-02-18 00:50:23.048968093 +0000 UTC m=+1496.426319035" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.058595 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.786568796 podStartE2EDuration="7.058579125s" podCreationTimestamp="2026-02-18 00:50:16 +0000 UTC" firstStartedPulling="2026-02-18 00:50:18.885315233 +0000 UTC m=+1492.262666175" lastFinishedPulling="2026-02-18 00:50:22.157325562 +0000 UTC m=+1495.534676504" observedRunningTime="2026-02-18 00:50:23.055131502 +0000 UTC m=+1496.432482444" watchObservedRunningTime="2026-02-18 00:50:23.058579125 +0000 UTC m=+1496.435930067" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.078758 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.796757032 podStartE2EDuration="7.078739711s" podCreationTimestamp="2026-02-18 00:50:16 +0000 UTC" firstStartedPulling="2026-02-18 00:50:18.885559129 +0000 UTC m=+1492.262910081" lastFinishedPulling="2026-02-18 00:50:22.167541818 +0000 UTC m=+1495.544892760" observedRunningTime="2026-02-18 00:50:23.078120556 +0000 UTC m=+1496.455471498" watchObservedRunningTime="2026-02-18 00:50:23.078739711 +0000 UTC m=+1496.456090653" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.166233 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.9791667569999998 podStartE2EDuration="7.166203629s" podCreationTimestamp="2026-02-18 00:50:16 +0000 
UTC" firstStartedPulling="2026-02-18 00:50:18.960501224 +0000 UTC m=+1492.337852166" lastFinishedPulling="2026-02-18 00:50:22.147538096 +0000 UTC m=+1495.524889038" observedRunningTime="2026-02-18 00:50:23.107885293 +0000 UTC m=+1496.485236235" watchObservedRunningTime="2026-02-18 00:50:23.166203629 +0000 UTC m=+1496.543554571" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.172667 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.175810 4847 scope.go:117] "RemoveContainer" containerID="4884b0b1db047893e8ff5cbc0cf096ba65ec33d14799c4a00ed2048148ebf3ca" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.195700 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.209422 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:50:23 crc kubenswrapper[4847]: E0218 00:50:23.209967 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2a4151-c745-47c2-bb78-19325cff2a61" containerName="ceilometer-notification-agent" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.209989 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2a4151-c745-47c2-bb78-19325cff2a61" containerName="ceilometer-notification-agent" Feb 18 00:50:23 crc kubenswrapper[4847]: E0218 00:50:23.210004 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2a4151-c745-47c2-bb78-19325cff2a61" containerName="proxy-httpd" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.210012 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2a4151-c745-47c2-bb78-19325cff2a61" containerName="proxy-httpd" Feb 18 00:50:23 crc kubenswrapper[4847]: E0218 00:50:23.210031 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2a4151-c745-47c2-bb78-19325cff2a61" containerName="ceilometer-central-agent" Feb 18 00:50:23 crc 
kubenswrapper[4847]: I0218 00:50:23.210042 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2a4151-c745-47c2-bb78-19325cff2a61" containerName="ceilometer-central-agent" Feb 18 00:50:23 crc kubenswrapper[4847]: E0218 00:50:23.210061 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2a4151-c745-47c2-bb78-19325cff2a61" containerName="sg-core" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.210070 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2a4151-c745-47c2-bb78-19325cff2a61" containerName="sg-core" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.210368 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab2a4151-c745-47c2-bb78-19325cff2a61" containerName="ceilometer-central-agent" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.210392 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab2a4151-c745-47c2-bb78-19325cff2a61" containerName="sg-core" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.210411 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab2a4151-c745-47c2-bb78-19325cff2a61" containerName="ceilometer-notification-agent" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.210425 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab2a4151-c745-47c2-bb78-19325cff2a61" containerName="proxy-httpd" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.220869 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.224123 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.224431 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.226453 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.229319 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.243479 4847 scope.go:117] "RemoveContainer" containerID="f5d4f54d55100799d054e69a181b629fcbd5eed99c3669d4db498dad071384d7" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.283450 4847 scope.go:117] "RemoveContainer" containerID="e87fa9c7984bc0d5fceb700f4508698b9309b96bf4ef3ae01e22886c7b94c0c4" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.308805 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-config-data\") pod \"ceilometer-0\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.308881 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0577c3f-a57d-4691-861b-3107614b86bc-run-httpd\") pod \"ceilometer-0\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.308954 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a0577c3f-a57d-4691-861b-3107614b86bc-log-httpd\") pod \"ceilometer-0\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.309070 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.309231 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7hcs\" (UniqueName: \"kubernetes.io/projected/a0577c3f-a57d-4691-861b-3107614b86bc-kube-api-access-s7hcs\") pod \"ceilometer-0\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.309464 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.309490 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-scripts\") pod \"ceilometer-0\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.309738 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.411581 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.411658 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-scripts\") pod \"ceilometer-0\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.411705 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.411758 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-config-data\") pod \"ceilometer-0\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.411802 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0577c3f-a57d-4691-861b-3107614b86bc-run-httpd\") pod \"ceilometer-0\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.411860 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/a0577c3f-a57d-4691-861b-3107614b86bc-log-httpd\") pod \"ceilometer-0\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.411888 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.411914 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7hcs\" (UniqueName: \"kubernetes.io/projected/a0577c3f-a57d-4691-861b-3107614b86bc-kube-api-access-s7hcs\") pod \"ceilometer-0\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.413337 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0577c3f-a57d-4691-861b-3107614b86bc-log-httpd\") pod \"ceilometer-0\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.413459 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0577c3f-a57d-4691-861b-3107614b86bc-run-httpd\") pod \"ceilometer-0\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.417790 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 
00:50:23.418411 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.418409 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.418861 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-scripts\") pod \"ceilometer-0\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.419835 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-config-data\") pod \"ceilometer-0\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.435824 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab2a4151-c745-47c2-bb78-19325cff2a61" path="/var/lib/kubelet/pods/ab2a4151-c745-47c2-bb78-19325cff2a61/volumes" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.440196 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7hcs\" (UniqueName: \"kubernetes.io/projected/a0577c3f-a57d-4691-861b-3107614b86bc-kube-api-access-s7hcs\") pod \"ceilometer-0\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 
00:50:23.491584 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.491835 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.491924 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.492767 4847 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d"} pod="openshift-machine-config-operator/machine-config-daemon-xsj47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.492901 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" containerID="cri-o://c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" gracePeriod=600 Feb 18 00:50:23 crc kubenswrapper[4847]: I0218 00:50:23.556133 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:50:23 crc kubenswrapper[4847]: E0218 00:50:23.624902 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:50:24 crc kubenswrapper[4847]: I0218 00:50:24.040447 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:50:24 crc kubenswrapper[4847]: I0218 00:50:24.049435 4847 generic.go:334] "Generic (PLEG): container finished" podID="f76974c5-d87a-4a52-ac85-364597594818" containerID="8cbbea99e9e673e1548ce862f3993b8cfea9bf43ae00947a4fcc8f6bc37891ba" exitCode=143 Feb 18 00:50:24 crc kubenswrapper[4847]: I0218 00:50:24.049525 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f76974c5-d87a-4a52-ac85-364597594818","Type":"ContainerDied","Data":"8cbbea99e9e673e1548ce862f3993b8cfea9bf43ae00947a4fcc8f6bc37891ba"} Feb 18 00:50:24 crc kubenswrapper[4847]: I0218 00:50:24.055888 4847 generic.go:334] "Generic (PLEG): container finished" podID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" exitCode=0 Feb 18 00:50:24 crc kubenswrapper[4847]: I0218 00:50:24.055959 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerDied","Data":"c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d"} Feb 18 00:50:24 crc kubenswrapper[4847]: I0218 00:50:24.056025 4847 scope.go:117] "RemoveContainer" 
containerID="270eacc836d3834cb6726d9cae5de99162027296d57351176eedc46878735764" Feb 18 00:50:24 crc kubenswrapper[4847]: W0218 00:50:24.056125 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0577c3f_a57d_4691_861b_3107614b86bc.slice/crio-af83b5ae588b4c5af0e030244419851562664a562f959ff087c1d1e578f7363d WatchSource:0}: Error finding container af83b5ae588b4c5af0e030244419851562664a562f959ff087c1d1e578f7363d: Status 404 returned error can't find the container with id af83b5ae588b4c5af0e030244419851562664a562f959ff087c1d1e578f7363d Feb 18 00:50:24 crc kubenswrapper[4847]: I0218 00:50:24.056682 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:50:24 crc kubenswrapper[4847]: E0218 00:50:24.056905 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:50:24 crc kubenswrapper[4847]: I0218 00:50:24.504170 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:50:25 crc kubenswrapper[4847]: I0218 00:50:25.071314 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0577c3f-a57d-4691-861b-3107614b86bc","Type":"ContainerStarted","Data":"1745b060abde282a4918561646f01fba2bd009c32ab6d0e5503f2e0d1997780a"} Feb 18 00:50:25 crc kubenswrapper[4847]: I0218 00:50:25.071621 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a0577c3f-a57d-4691-861b-3107614b86bc","Type":"ContainerStarted","Data":"af83b5ae588b4c5af0e030244419851562664a562f959ff087c1d1e578f7363d"} Feb 18 00:50:26 crc kubenswrapper[4847]: I0218 00:50:26.083804 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0577c3f-a57d-4691-861b-3107614b86bc","Type":"ContainerStarted","Data":"8b1bf6e75c0e371d61fa751126ed3a189072641bfb79c0aca80fc0080f9e65dc"} Feb 18 00:50:26 crc kubenswrapper[4847]: I0218 00:50:26.263298 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-vpbzx"] Feb 18 00:50:26 crc kubenswrapper[4847]: I0218 00:50:26.265083 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-vpbzx" Feb 18 00:50:26 crc kubenswrapper[4847]: I0218 00:50:26.279776 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-vpbzx"] Feb 18 00:50:26 crc kubenswrapper[4847]: I0218 00:50:26.379243 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/701b730c-8421-410f-a849-24f8a092e781-operator-scripts\") pod \"aodh-db-create-vpbzx\" (UID: \"701b730c-8421-410f-a849-24f8a092e781\") " pod="openstack/aodh-db-create-vpbzx" Feb 18 00:50:26 crc kubenswrapper[4847]: I0218 00:50:26.379646 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7z45\" (UniqueName: \"kubernetes.io/projected/701b730c-8421-410f-a849-24f8a092e781-kube-api-access-q7z45\") pod \"aodh-db-create-vpbzx\" (UID: \"701b730c-8421-410f-a849-24f8a092e781\") " pod="openstack/aodh-db-create-vpbzx" Feb 18 00:50:26 crc kubenswrapper[4847]: I0218 00:50:26.397951 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-4c4c-account-create-update-mcbxg"] Feb 18 00:50:26 crc kubenswrapper[4847]: I0218 00:50:26.399376 4847 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/aodh-4c4c-account-create-update-mcbxg" Feb 18 00:50:26 crc kubenswrapper[4847]: I0218 00:50:26.402907 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 18 00:50:26 crc kubenswrapper[4847]: I0218 00:50:26.417851 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-4c4c-account-create-update-mcbxg"] Feb 18 00:50:26 crc kubenswrapper[4847]: I0218 00:50:26.481801 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvfq7\" (UniqueName: \"kubernetes.io/projected/3d9912ea-2aab-435b-b8fc-d418d07085ce-kube-api-access-wvfq7\") pod \"aodh-4c4c-account-create-update-mcbxg\" (UID: \"3d9912ea-2aab-435b-b8fc-d418d07085ce\") " pod="openstack/aodh-4c4c-account-create-update-mcbxg" Feb 18 00:50:26 crc kubenswrapper[4847]: I0218 00:50:26.481984 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/701b730c-8421-410f-a849-24f8a092e781-operator-scripts\") pod \"aodh-db-create-vpbzx\" (UID: \"701b730c-8421-410f-a849-24f8a092e781\") " pod="openstack/aodh-db-create-vpbzx" Feb 18 00:50:26 crc kubenswrapper[4847]: I0218 00:50:26.482058 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d9912ea-2aab-435b-b8fc-d418d07085ce-operator-scripts\") pod \"aodh-4c4c-account-create-update-mcbxg\" (UID: \"3d9912ea-2aab-435b-b8fc-d418d07085ce\") " pod="openstack/aodh-4c4c-account-create-update-mcbxg" Feb 18 00:50:26 crc kubenswrapper[4847]: I0218 00:50:26.482196 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7z45\" (UniqueName: \"kubernetes.io/projected/701b730c-8421-410f-a849-24f8a092e781-kube-api-access-q7z45\") pod \"aodh-db-create-vpbzx\" (UID: 
\"701b730c-8421-410f-a849-24f8a092e781\") " pod="openstack/aodh-db-create-vpbzx" Feb 18 00:50:26 crc kubenswrapper[4847]: I0218 00:50:26.483469 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/701b730c-8421-410f-a849-24f8a092e781-operator-scripts\") pod \"aodh-db-create-vpbzx\" (UID: \"701b730c-8421-410f-a849-24f8a092e781\") " pod="openstack/aodh-db-create-vpbzx" Feb 18 00:50:26 crc kubenswrapper[4847]: I0218 00:50:26.503219 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7z45\" (UniqueName: \"kubernetes.io/projected/701b730c-8421-410f-a849-24f8a092e781-kube-api-access-q7z45\") pod \"aodh-db-create-vpbzx\" (UID: \"701b730c-8421-410f-a849-24f8a092e781\") " pod="openstack/aodh-db-create-vpbzx" Feb 18 00:50:26 crc kubenswrapper[4847]: I0218 00:50:26.584826 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvfq7\" (UniqueName: \"kubernetes.io/projected/3d9912ea-2aab-435b-b8fc-d418d07085ce-kube-api-access-wvfq7\") pod \"aodh-4c4c-account-create-update-mcbxg\" (UID: \"3d9912ea-2aab-435b-b8fc-d418d07085ce\") " pod="openstack/aodh-4c4c-account-create-update-mcbxg" Feb 18 00:50:26 crc kubenswrapper[4847]: I0218 00:50:26.584917 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d9912ea-2aab-435b-b8fc-d418d07085ce-operator-scripts\") pod \"aodh-4c4c-account-create-update-mcbxg\" (UID: \"3d9912ea-2aab-435b-b8fc-d418d07085ce\") " pod="openstack/aodh-4c4c-account-create-update-mcbxg" Feb 18 00:50:26 crc kubenswrapper[4847]: I0218 00:50:26.585734 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d9912ea-2aab-435b-b8fc-d418d07085ce-operator-scripts\") pod \"aodh-4c4c-account-create-update-mcbxg\" (UID: 
\"3d9912ea-2aab-435b-b8fc-d418d07085ce\") " pod="openstack/aodh-4c4c-account-create-update-mcbxg" Feb 18 00:50:26 crc kubenswrapper[4847]: I0218 00:50:26.589220 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-vpbzx" Feb 18 00:50:26 crc kubenswrapper[4847]: I0218 00:50:26.628318 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvfq7\" (UniqueName: \"kubernetes.io/projected/3d9912ea-2aab-435b-b8fc-d418d07085ce-kube-api-access-wvfq7\") pod \"aodh-4c4c-account-create-update-mcbxg\" (UID: \"3d9912ea-2aab-435b-b8fc-d418d07085ce\") " pod="openstack/aodh-4c4c-account-create-update-mcbxg" Feb 18 00:50:26 crc kubenswrapper[4847]: I0218 00:50:26.793106 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-4c4c-account-create-update-mcbxg" Feb 18 00:50:27 crc kubenswrapper[4847]: I0218 00:50:27.120299 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0577c3f-a57d-4691-861b-3107614b86bc","Type":"ContainerStarted","Data":"ca4657282a725e909bccf87b02f2181447171a04bfa88a2f454f91a55571611b"} Feb 18 00:50:27 crc kubenswrapper[4847]: I0218 00:50:27.149041 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-vpbzx"] Feb 18 00:50:27 crc kubenswrapper[4847]: W0218 00:50:27.153253 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod701b730c_8421_410f_a849_24f8a092e781.slice/crio-61f7df219da486e2ac9050124c7af1a0ef77cda35e07c6b5028a50fc870a7d91 WatchSource:0}: Error finding container 61f7df219da486e2ac9050124c7af1a0ef77cda35e07c6b5028a50fc870a7d91: Status 404 returned error can't find the container with id 61f7df219da486e2ac9050124c7af1a0ef77cda35e07c6b5028a50fc870a7d91 Feb 18 00:50:27 crc kubenswrapper[4847]: I0218 00:50:27.222655 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:27 crc kubenswrapper[4847]: I0218 00:50:27.309645 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 00:50:27 crc kubenswrapper[4847]: I0218 00:50:27.310428 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 00:50:27 crc kubenswrapper[4847]: I0218 00:50:27.315762 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-4c4c-account-create-update-mcbxg"] Feb 18 00:50:27 crc kubenswrapper[4847]: I0218 00:50:27.359117 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 00:50:27 crc kubenswrapper[4847]: I0218 00:50:27.359160 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 00:50:27 crc kubenswrapper[4847]: I0218 00:50:27.431145 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 00:50:27 crc kubenswrapper[4847]: I0218 00:50:27.544542 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 00:50:27 crc kubenswrapper[4847]: I0218 00:50:27.544593 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 00:50:27 crc kubenswrapper[4847]: I0218 00:50:27.585822 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:50:27 crc kubenswrapper[4847]: I0218 00:50:27.679950 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-sktjc"] Feb 18 00:50:27 crc kubenswrapper[4847]: I0218 00:50:27.680289 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" podUID="d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8" containerName="dnsmasq-dns" 
containerID="cri-o://d8e20d646a76c0687b92520cc134cc503412bb9bf8e42234dc22777cec414cc8" gracePeriod=10 Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.144847 4847 generic.go:334] "Generic (PLEG): container finished" podID="bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699" containerID="f014f3026c432a472e7ca049c99f28171b61b3457e422bd94f2e45b059f3f8da" exitCode=0 Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.145324 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-sbsm7" event={"ID":"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699","Type":"ContainerDied","Data":"f014f3026c432a472e7ca049c99f28171b61b3457e422bd94f2e45b059f3f8da"} Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.172266 4847 generic.go:334] "Generic (PLEG): container finished" podID="3d9912ea-2aab-435b-b8fc-d418d07085ce" containerID="1851255816afd251bc7e544dbc1c8ca3be8d3d1e314706a172dccd60c97909d9" exitCode=0 Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.172370 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4c4c-account-create-update-mcbxg" event={"ID":"3d9912ea-2aab-435b-b8fc-d418d07085ce","Type":"ContainerDied","Data":"1851255816afd251bc7e544dbc1c8ca3be8d3d1e314706a172dccd60c97909d9"} Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.172397 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4c4c-account-create-update-mcbxg" event={"ID":"3d9912ea-2aab-435b-b8fc-d418d07085ce","Type":"ContainerStarted","Data":"cce10c6cbdc60ae9fb39e60e005688fc223b19aa5c018723fc93e19a2ebedf6f"} Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.180530 4847 generic.go:334] "Generic (PLEG): container finished" podID="701b730c-8421-410f-a849-24f8a092e781" containerID="e9e0064a6dccfbf9dea70b23131ca52130b41e267edeac7fc5a6399ef999c370" exitCode=0 Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.180714 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-vpbzx" 
event={"ID":"701b730c-8421-410f-a849-24f8a092e781","Type":"ContainerDied","Data":"e9e0064a6dccfbf9dea70b23131ca52130b41e267edeac7fc5a6399ef999c370"} Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.180746 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-vpbzx" event={"ID":"701b730c-8421-410f-a849-24f8a092e781","Type":"ContainerStarted","Data":"61f7df219da486e2ac9050124c7af1a0ef77cda35e07c6b5028a50fc870a7d91"} Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.196991 4847 generic.go:334] "Generic (PLEG): container finished" podID="d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8" containerID="d8e20d646a76c0687b92520cc134cc503412bb9bf8e42234dc22777cec414cc8" exitCode=0 Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.197050 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" event={"ID":"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8","Type":"ContainerDied","Data":"d8e20d646a76c0687b92520cc134cc503412bb9bf8e42234dc22777cec414cc8"} Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.202159 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a0577c3f-a57d-4691-861b-3107614b86bc" containerName="ceilometer-central-agent" containerID="cri-o://1745b060abde282a4918561646f01fba2bd009c32ab6d0e5503f2e0d1997780a" gracePeriod=30 Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.202367 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0577c3f-a57d-4691-861b-3107614b86bc","Type":"ContainerStarted","Data":"d777596b7a54d0cda166aa5780ac0febf8394107020b950d8e10e7e588201977"} Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.202403 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.202438 4847 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="a0577c3f-a57d-4691-861b-3107614b86bc" containerName="proxy-httpd" containerID="cri-o://d777596b7a54d0cda166aa5780ac0febf8394107020b950d8e10e7e588201977" gracePeriod=30 Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.202478 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a0577c3f-a57d-4691-861b-3107614b86bc" containerName="sg-core" containerID="cri-o://ca4657282a725e909bccf87b02f2181447171a04bfa88a2f454f91a55571611b" gracePeriod=30 Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.202517 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a0577c3f-a57d-4691-861b-3107614b86bc" containerName="ceilometer-notification-agent" containerID="cri-o://8b1bf6e75c0e371d61fa751126ed3a189072641bfb79c0aca80fc0080f9e65dc" gracePeriod=30 Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.253562 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.254366 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.532740497 podStartE2EDuration="5.254341839s" podCreationTimestamp="2026-02-18 00:50:23 +0000 UTC" firstStartedPulling="2026-02-18 00:50:24.060565156 +0000 UTC m=+1497.437916098" lastFinishedPulling="2026-02-18 00:50:27.782166498 +0000 UTC m=+1501.159517440" observedRunningTime="2026-02-18 00:50:28.233880446 +0000 UTC m=+1501.611231388" watchObservedRunningTime="2026-02-18 00:50:28.254341839 +0000 UTC m=+1501.631692781" Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.259587 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.347929 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqx47\" (UniqueName: \"kubernetes.io/projected/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-kube-api-access-gqx47\") pod \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.348004 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-ovsdbserver-nb\") pod \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.348111 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-config\") pod \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.348254 4847 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-dns-swift-storage-0\") pod \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.348587 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-dns-svc\") pod \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.348645 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-ovsdbserver-sb\") pod \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\" (UID: \"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8\") " Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.352108 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="24af341d-fcce-475e-95b2-ddd3c8d30114" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.224:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.352143 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="24af341d-fcce-475e-95b2-ddd3c8d30114" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.224:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.365894 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-kube-api-access-gqx47" (OuterVolumeSpecName: "kube-api-access-gqx47") pod 
"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8" (UID: "d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8"). InnerVolumeSpecName "kube-api-access-gqx47". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.409922 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-config" (OuterVolumeSpecName: "config") pod "d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8" (UID: "d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.430115 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8" (UID: "d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.449711 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8" (UID: "d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.451203 4847 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.451237 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqx47\" (UniqueName: \"kubernetes.io/projected/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-kube-api-access-gqx47\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.451250 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.451261 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.452626 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8" (UID: "d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.475353 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8" (UID: "d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.553616 4847 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:28 crc kubenswrapper[4847]: I0218 00:50:28.553654 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:29 crc kubenswrapper[4847]: I0218 00:50:29.216045 4847 generic.go:334] "Generic (PLEG): container finished" podID="a0577c3f-a57d-4691-861b-3107614b86bc" containerID="ca4657282a725e909bccf87b02f2181447171a04bfa88a2f454f91a55571611b" exitCode=2 Feb 18 00:50:29 crc kubenswrapper[4847]: I0218 00:50:29.216518 4847 generic.go:334] "Generic (PLEG): container finished" podID="a0577c3f-a57d-4691-861b-3107614b86bc" containerID="8b1bf6e75c0e371d61fa751126ed3a189072641bfb79c0aca80fc0080f9e65dc" exitCode=0 Feb 18 00:50:29 crc kubenswrapper[4847]: I0218 00:50:29.216124 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0577c3f-a57d-4691-861b-3107614b86bc","Type":"ContainerDied","Data":"ca4657282a725e909bccf87b02f2181447171a04bfa88a2f454f91a55571611b"} Feb 18 00:50:29 crc kubenswrapper[4847]: I0218 00:50:29.216669 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0577c3f-a57d-4691-861b-3107614b86bc","Type":"ContainerDied","Data":"8b1bf6e75c0e371d61fa751126ed3a189072641bfb79c0aca80fc0080f9e65dc"} Feb 18 00:50:29 crc kubenswrapper[4847]: I0218 00:50:29.218376 4847 generic.go:334] "Generic (PLEG): container finished" podID="c41b6174-4c4e-48d6-b094-0af6d3781553" containerID="fa1c63c3075a1f894f94a3315b6b537ea1c25aabee71edc000ce5baa1aa47a48" exitCode=0 Feb 18 00:50:29 crc 
kubenswrapper[4847]: I0218 00:50:29.218445 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6bzdl" event={"ID":"c41b6174-4c4e-48d6-b094-0af6d3781553","Type":"ContainerDied","Data":"fa1c63c3075a1f894f94a3315b6b537ea1c25aabee71edc000ce5baa1aa47a48"} Feb 18 00:50:29 crc kubenswrapper[4847]: I0218 00:50:29.220403 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" Feb 18 00:50:29 crc kubenswrapper[4847]: I0218 00:50:29.220460 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-sktjc" event={"ID":"d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8","Type":"ContainerDied","Data":"6cbc4bba92ec5be154bdc8d2f04f7b24b30c3745230592a998b6409a4e486610"} Feb 18 00:50:29 crc kubenswrapper[4847]: I0218 00:50:29.220529 4847 scope.go:117] "RemoveContainer" containerID="d8e20d646a76c0687b92520cc134cc503412bb9bf8e42234dc22777cec414cc8" Feb 18 00:50:29 crc kubenswrapper[4847]: I0218 00:50:29.289272 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-sktjc"] Feb 18 00:50:29 crc kubenswrapper[4847]: I0218 00:50:29.302089 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-sktjc"] Feb 18 00:50:29 crc kubenswrapper[4847]: I0218 00:50:29.304235 4847 scope.go:117] "RemoveContainer" containerID="dc0230118c6b93a18a14c9cdeb9bb9b77f11d1084f4281c68b6ddfdb22b82bca" Feb 18 00:50:29 crc kubenswrapper[4847]: I0218 00:50:29.419904 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8" path="/var/lib/kubelet/pods/d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8/volumes" Feb 18 00:50:29 crc kubenswrapper[4847]: I0218 00:50:29.826610 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-4c4c-account-create-update-mcbxg" Feb 18 00:50:29 crc kubenswrapper[4847]: I0218 00:50:29.888044 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d9912ea-2aab-435b-b8fc-d418d07085ce-operator-scripts\") pod \"3d9912ea-2aab-435b-b8fc-d418d07085ce\" (UID: \"3d9912ea-2aab-435b-b8fc-d418d07085ce\") " Feb 18 00:50:29 crc kubenswrapper[4847]: I0218 00:50:29.888270 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvfq7\" (UniqueName: \"kubernetes.io/projected/3d9912ea-2aab-435b-b8fc-d418d07085ce-kube-api-access-wvfq7\") pod \"3d9912ea-2aab-435b-b8fc-d418d07085ce\" (UID: \"3d9912ea-2aab-435b-b8fc-d418d07085ce\") " Feb 18 00:50:29 crc kubenswrapper[4847]: I0218 00:50:29.889035 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d9912ea-2aab-435b-b8fc-d418d07085ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d9912ea-2aab-435b-b8fc-d418d07085ce" (UID: "3d9912ea-2aab-435b-b8fc-d418d07085ce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:50:29 crc kubenswrapper[4847]: I0218 00:50:29.896276 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d9912ea-2aab-435b-b8fc-d418d07085ce-kube-api-access-wvfq7" (OuterVolumeSpecName: "kube-api-access-wvfq7") pod "3d9912ea-2aab-435b-b8fc-d418d07085ce" (UID: "3d9912ea-2aab-435b-b8fc-d418d07085ce"). InnerVolumeSpecName "kube-api-access-wvfq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:50:29 crc kubenswrapper[4847]: I0218 00:50:29.967345 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-sbsm7" Feb 18 00:50:29 crc kubenswrapper[4847]: I0218 00:50:29.968318 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-vpbzx" Feb 18 00:50:29 crc kubenswrapper[4847]: I0218 00:50:29.993231 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d9912ea-2aab-435b-b8fc-d418d07085ce-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:29 crc kubenswrapper[4847]: I0218 00:50:29.993275 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvfq7\" (UniqueName: \"kubernetes.io/projected/3d9912ea-2aab-435b-b8fc-d418d07085ce-kube-api-access-wvfq7\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.095043 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-combined-ca-bundle\") pod \"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699\" (UID: \"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699\") " Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.095540 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lrjk\" (UniqueName: \"kubernetes.io/projected/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-kube-api-access-2lrjk\") pod \"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699\" (UID: \"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699\") " Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.095718 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-scripts\") pod \"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699\" (UID: \"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699\") " Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.096373 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/701b730c-8421-410f-a849-24f8a092e781-operator-scripts\") pod \"701b730c-8421-410f-a849-24f8a092e781\" (UID: \"701b730c-8421-410f-a849-24f8a092e781\") " Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.096666 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7z45\" (UniqueName: \"kubernetes.io/projected/701b730c-8421-410f-a849-24f8a092e781-kube-api-access-q7z45\") pod \"701b730c-8421-410f-a849-24f8a092e781\" (UID: \"701b730c-8421-410f-a849-24f8a092e781\") " Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.096795 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-config-data\") pod \"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699\" (UID: \"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699\") " Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.096931 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/701b730c-8421-410f-a849-24f8a092e781-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "701b730c-8421-410f-a849-24f8a092e781" (UID: "701b730c-8421-410f-a849-24f8a092e781"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.097514 4847 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/701b730c-8421-410f-a849-24f8a092e781-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.101527 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-kube-api-access-2lrjk" (OuterVolumeSpecName: "kube-api-access-2lrjk") pod "bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699" (UID: "bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699"). InnerVolumeSpecName "kube-api-access-2lrjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.105030 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/701b730c-8421-410f-a849-24f8a092e781-kube-api-access-q7z45" (OuterVolumeSpecName: "kube-api-access-q7z45") pod "701b730c-8421-410f-a849-24f8a092e781" (UID: "701b730c-8421-410f-a849-24f8a092e781"). InnerVolumeSpecName "kube-api-access-q7z45". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.119105 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-scripts" (OuterVolumeSpecName: "scripts") pod "bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699" (UID: "bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.131152 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699" (UID: "bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.134500 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-config-data" (OuterVolumeSpecName: "config-data") pod "bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699" (UID: "bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.208803 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7z45\" (UniqueName: \"kubernetes.io/projected/701b730c-8421-410f-a849-24f8a092e781-kube-api-access-q7z45\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.208868 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.208896 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.208928 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lrjk\" (UniqueName: \"kubernetes.io/projected/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-kube-api-access-2lrjk\") on node 
\"crc\" DevicePath \"\"" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.208951 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.230419 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-sbsm7" event={"ID":"bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699","Type":"ContainerDied","Data":"a3ede99bc1f5655a2c3d386c787fb249b36b5460b93d1b32e610fe7bbfbd580c"} Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.230461 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3ede99bc1f5655a2c3d386c787fb249b36b5460b93d1b32e610fe7bbfbd580c" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.230516 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-sbsm7" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.233416 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4c4c-account-create-update-mcbxg" event={"ID":"3d9912ea-2aab-435b-b8fc-d418d07085ce","Type":"ContainerDied","Data":"cce10c6cbdc60ae9fb39e60e005688fc223b19aa5c018723fc93e19a2ebedf6f"} Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.233457 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cce10c6cbdc60ae9fb39e60e005688fc223b19aa5c018723fc93e19a2ebedf6f" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.233556 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-4c4c-account-create-update-mcbxg" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.237222 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-vpbzx" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.238754 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-vpbzx" event={"ID":"701b730c-8421-410f-a849-24f8a092e781","Type":"ContainerDied","Data":"61f7df219da486e2ac9050124c7af1a0ef77cda35e07c6b5028a50fc870a7d91"} Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.239570 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61f7df219da486e2ac9050124c7af1a0ef77cda35e07c6b5028a50fc870a7d91" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.347855 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.348102 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="24af341d-fcce-475e-95b2-ddd3c8d30114" containerName="nova-api-log" containerID="cri-o://912a5aa02801268fce64bdc934c9801d20a337511ab7cf8be2ae7c1bba9b57c2" gracePeriod=30 Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.348245 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="24af341d-fcce-475e-95b2-ddd3c8d30114" containerName="nova-api-api" containerID="cri-o://6e2cc041f47cdc6c6177499b58cbb653456046f8dcd6ec586466af3eac5e847e" gracePeriod=30 Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.381588 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.381912 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8f64ddc2-419d-4a08-8418-d19033c2b549" containerName="nova-scheduler-scheduler" containerID="cri-o://cb7dbbb06a74dd6b04dbb76a2355b6cadda06a1117847475a3f9c4eb42874791" gracePeriod=30 Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.593161 4847 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6bzdl" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.728481 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41b6174-4c4e-48d6-b094-0af6d3781553-combined-ca-bundle\") pod \"c41b6174-4c4e-48d6-b094-0af6d3781553\" (UID: \"c41b6174-4c4e-48d6-b094-0af6d3781553\") " Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.728989 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-559sp\" (UniqueName: \"kubernetes.io/projected/c41b6174-4c4e-48d6-b094-0af6d3781553-kube-api-access-559sp\") pod \"c41b6174-4c4e-48d6-b094-0af6d3781553\" (UID: \"c41b6174-4c4e-48d6-b094-0af6d3781553\") " Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.729017 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c41b6174-4c4e-48d6-b094-0af6d3781553-scripts\") pod \"c41b6174-4c4e-48d6-b094-0af6d3781553\" (UID: \"c41b6174-4c4e-48d6-b094-0af6d3781553\") " Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.729538 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41b6174-4c4e-48d6-b094-0af6d3781553-config-data\") pod \"c41b6174-4c4e-48d6-b094-0af6d3781553\" (UID: \"c41b6174-4c4e-48d6-b094-0af6d3781553\") " Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.735749 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41b6174-4c4e-48d6-b094-0af6d3781553-scripts" (OuterVolumeSpecName: "scripts") pod "c41b6174-4c4e-48d6-b094-0af6d3781553" (UID: "c41b6174-4c4e-48d6-b094-0af6d3781553"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.735807 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c41b6174-4c4e-48d6-b094-0af6d3781553-kube-api-access-559sp" (OuterVolumeSpecName: "kube-api-access-559sp") pod "c41b6174-4c4e-48d6-b094-0af6d3781553" (UID: "c41b6174-4c4e-48d6-b094-0af6d3781553"). InnerVolumeSpecName "kube-api-access-559sp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.759932 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41b6174-4c4e-48d6-b094-0af6d3781553-config-data" (OuterVolumeSpecName: "config-data") pod "c41b6174-4c4e-48d6-b094-0af6d3781553" (UID: "c41b6174-4c4e-48d6-b094-0af6d3781553"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.771817 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c41b6174-4c4e-48d6-b094-0af6d3781553-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c41b6174-4c4e-48d6-b094-0af6d3781553" (UID: "c41b6174-4c4e-48d6-b094-0af6d3781553"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.832253 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-559sp\" (UniqueName: \"kubernetes.io/projected/c41b6174-4c4e-48d6-b094-0af6d3781553-kube-api-access-559sp\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.832290 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c41b6174-4c4e-48d6-b094-0af6d3781553-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.832300 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c41b6174-4c4e-48d6-b094-0af6d3781553-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:30 crc kubenswrapper[4847]: I0218 00:50:30.832308 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41b6174-4c4e-48d6-b094-0af6d3781553-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.255009 4847 generic.go:334] "Generic (PLEG): container finished" podID="8f64ddc2-419d-4a08-8418-d19033c2b549" containerID="cb7dbbb06a74dd6b04dbb76a2355b6cadda06a1117847475a3f9c4eb42874791" exitCode=0 Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.255106 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8f64ddc2-419d-4a08-8418-d19033c2b549","Type":"ContainerDied","Data":"cb7dbbb06a74dd6b04dbb76a2355b6cadda06a1117847475a3f9c4eb42874791"} Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.258012 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-6bzdl" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.258041 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-6bzdl" event={"ID":"c41b6174-4c4e-48d6-b094-0af6d3781553","Type":"ContainerDied","Data":"cd22abb1e66b45c998f8d1443b81af183c426adae5c7cdf1b46699da7cf2398f"} Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.258321 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd22abb1e66b45c998f8d1443b81af183c426adae5c7cdf1b46699da7cf2398f" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.264955 4847 generic.go:334] "Generic (PLEG): container finished" podID="24af341d-fcce-475e-95b2-ddd3c8d30114" containerID="912a5aa02801268fce64bdc934c9801d20a337511ab7cf8be2ae7c1bba9b57c2" exitCode=143 Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.265015 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"24af341d-fcce-475e-95b2-ddd3c8d30114","Type":"ContainerDied","Data":"912a5aa02801268fce64bdc934c9801d20a337511ab7cf8be2ae7c1bba9b57c2"} Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.320854 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 00:50:31 crc kubenswrapper[4847]: E0218 00:50:31.321357 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699" containerName="nova-manage" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.321375 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699" containerName="nova-manage" Feb 18 00:50:31 crc kubenswrapper[4847]: E0218 00:50:31.321390 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c41b6174-4c4e-48d6-b094-0af6d3781553" containerName="nova-cell1-conductor-db-sync" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.321397 4847 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c41b6174-4c4e-48d6-b094-0af6d3781553" containerName="nova-cell1-conductor-db-sync" Feb 18 00:50:31 crc kubenswrapper[4847]: E0218 00:50:31.321412 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8" containerName="init" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.321419 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8" containerName="init" Feb 18 00:50:31 crc kubenswrapper[4847]: E0218 00:50:31.321431 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="701b730c-8421-410f-a849-24f8a092e781" containerName="mariadb-database-create" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.321437 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="701b730c-8421-410f-a849-24f8a092e781" containerName="mariadb-database-create" Feb 18 00:50:31 crc kubenswrapper[4847]: E0218 00:50:31.321444 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8" containerName="dnsmasq-dns" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.321450 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8" containerName="dnsmasq-dns" Feb 18 00:50:31 crc kubenswrapper[4847]: E0218 00:50:31.321475 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9912ea-2aab-435b-b8fc-d418d07085ce" containerName="mariadb-account-create-update" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.321481 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9912ea-2aab-435b-b8fc-d418d07085ce" containerName="mariadb-account-create-update" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.321679 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="701b730c-8421-410f-a849-24f8a092e781" containerName="mariadb-database-create" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 
00:50:31.321690 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d9912ea-2aab-435b-b8fc-d418d07085ce" containerName="mariadb-account-create-update" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.321699 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="c41b6174-4c4e-48d6-b094-0af6d3781553" containerName="nova-cell1-conductor-db-sync" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.321719 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="d07eaadd-7b7d-4ccc-9ef3-ad7bdd2524b8" containerName="dnsmasq-dns" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.321730 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699" containerName="nova-manage" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.322533 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.326201 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.338309 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.447449 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.456017 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfpzp\" (UniqueName: \"kubernetes.io/projected/9e330065-0783-4200-8af0-e726b820aa6d-kube-api-access-wfpzp\") pod \"nova-cell1-conductor-0\" (UID: \"9e330065-0783-4200-8af0-e726b820aa6d\") " pod="openstack/nova-cell1-conductor-0" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.456081 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e330065-0783-4200-8af0-e726b820aa6d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9e330065-0783-4200-8af0-e726b820aa6d\") " pod="openstack/nova-cell1-conductor-0" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.456455 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e330065-0783-4200-8af0-e726b820aa6d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9e330065-0783-4200-8af0-e726b820aa6d\") " pod="openstack/nova-cell1-conductor-0" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.557363 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f64ddc2-419d-4a08-8418-d19033c2b549-combined-ca-bundle\") pod \"8f64ddc2-419d-4a08-8418-d19033c2b549\" (UID: \"8f64ddc2-419d-4a08-8418-d19033c2b549\") " Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.557471 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f64ddc2-419d-4a08-8418-d19033c2b549-config-data\") pod \"8f64ddc2-419d-4a08-8418-d19033c2b549\" (UID: \"8f64ddc2-419d-4a08-8418-d19033c2b549\") " Feb 18 00:50:31 crc 
kubenswrapper[4847]: I0218 00:50:31.557632 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz9b9\" (UniqueName: \"kubernetes.io/projected/8f64ddc2-419d-4a08-8418-d19033c2b549-kube-api-access-jz9b9\") pod \"8f64ddc2-419d-4a08-8418-d19033c2b549\" (UID: \"8f64ddc2-419d-4a08-8418-d19033c2b549\") " Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.558035 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e330065-0783-4200-8af0-e726b820aa6d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9e330065-0783-4200-8af0-e726b820aa6d\") " pod="openstack/nova-cell1-conductor-0" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.558127 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfpzp\" (UniqueName: \"kubernetes.io/projected/9e330065-0783-4200-8af0-e726b820aa6d-kube-api-access-wfpzp\") pod \"nova-cell1-conductor-0\" (UID: \"9e330065-0783-4200-8af0-e726b820aa6d\") " pod="openstack/nova-cell1-conductor-0" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.558173 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e330065-0783-4200-8af0-e726b820aa6d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9e330065-0783-4200-8af0-e726b820aa6d\") " pod="openstack/nova-cell1-conductor-0" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.563887 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f64ddc2-419d-4a08-8418-d19033c2b549-kube-api-access-jz9b9" (OuterVolumeSpecName: "kube-api-access-jz9b9") pod "8f64ddc2-419d-4a08-8418-d19033c2b549" (UID: "8f64ddc2-419d-4a08-8418-d19033c2b549"). InnerVolumeSpecName "kube-api-access-jz9b9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.564705 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e330065-0783-4200-8af0-e726b820aa6d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9e330065-0783-4200-8af0-e726b820aa6d\") " pod="openstack/nova-cell1-conductor-0" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.565570 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e330065-0783-4200-8af0-e726b820aa6d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9e330065-0783-4200-8af0-e726b820aa6d\") " pod="openstack/nova-cell1-conductor-0" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.582376 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfpzp\" (UniqueName: \"kubernetes.io/projected/9e330065-0783-4200-8af0-e726b820aa6d-kube-api-access-wfpzp\") pod \"nova-cell1-conductor-0\" (UID: \"9e330065-0783-4200-8af0-e726b820aa6d\") " pod="openstack/nova-cell1-conductor-0" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.596353 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f64ddc2-419d-4a08-8418-d19033c2b549-config-data" (OuterVolumeSpecName: "config-data") pod "8f64ddc2-419d-4a08-8418-d19033c2b549" (UID: "8f64ddc2-419d-4a08-8418-d19033c2b549"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.625309 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f64ddc2-419d-4a08-8418-d19033c2b549-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f64ddc2-419d-4a08-8418-d19033c2b549" (UID: "8f64ddc2-419d-4a08-8418-d19033c2b549"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.638513 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.660521 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz9b9\" (UniqueName: \"kubernetes.io/projected/8f64ddc2-419d-4a08-8418-d19033c2b549-kube-api-access-jz9b9\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.660556 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f64ddc2-419d-4a08-8418-d19033c2b549-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.660569 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f64ddc2-419d-4a08-8418-d19033c2b549-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.720791 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-pzwv2"] Feb 18 00:50:31 crc kubenswrapper[4847]: E0218 00:50:31.721217 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f64ddc2-419d-4a08-8418-d19033c2b549" containerName="nova-scheduler-scheduler" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.721234 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f64ddc2-419d-4a08-8418-d19033c2b549" containerName="nova-scheduler-scheduler" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.721442 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f64ddc2-419d-4a08-8418-d19033c2b549" containerName="nova-scheduler-scheduler" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.722147 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-pzwv2" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.726262 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.726699 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-9sw76" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.726872 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.727268 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.747873 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-pzwv2"] Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.864631 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa59fc3a-ea9b-45bb-a190-1844834093e9-config-data\") pod \"aodh-db-sync-pzwv2\" (UID: \"fa59fc3a-ea9b-45bb-a190-1844834093e9\") " pod="openstack/aodh-db-sync-pzwv2" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.865096 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa59fc3a-ea9b-45bb-a190-1844834093e9-scripts\") pod \"aodh-db-sync-pzwv2\" (UID: \"fa59fc3a-ea9b-45bb-a190-1844834093e9\") " pod="openstack/aodh-db-sync-pzwv2" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.865155 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcwwn\" (UniqueName: \"kubernetes.io/projected/fa59fc3a-ea9b-45bb-a190-1844834093e9-kube-api-access-gcwwn\") pod \"aodh-db-sync-pzwv2\" (UID: \"fa59fc3a-ea9b-45bb-a190-1844834093e9\") " 
pod="openstack/aodh-db-sync-pzwv2" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.865255 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa59fc3a-ea9b-45bb-a190-1844834093e9-combined-ca-bundle\") pod \"aodh-db-sync-pzwv2\" (UID: \"fa59fc3a-ea9b-45bb-a190-1844834093e9\") " pod="openstack/aodh-db-sync-pzwv2" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.967614 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa59fc3a-ea9b-45bb-a190-1844834093e9-config-data\") pod \"aodh-db-sync-pzwv2\" (UID: \"fa59fc3a-ea9b-45bb-a190-1844834093e9\") " pod="openstack/aodh-db-sync-pzwv2" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.967718 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa59fc3a-ea9b-45bb-a190-1844834093e9-scripts\") pod \"aodh-db-sync-pzwv2\" (UID: \"fa59fc3a-ea9b-45bb-a190-1844834093e9\") " pod="openstack/aodh-db-sync-pzwv2" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.967765 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcwwn\" (UniqueName: \"kubernetes.io/projected/fa59fc3a-ea9b-45bb-a190-1844834093e9-kube-api-access-gcwwn\") pod \"aodh-db-sync-pzwv2\" (UID: \"fa59fc3a-ea9b-45bb-a190-1844834093e9\") " pod="openstack/aodh-db-sync-pzwv2" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.967838 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa59fc3a-ea9b-45bb-a190-1844834093e9-combined-ca-bundle\") pod \"aodh-db-sync-pzwv2\" (UID: \"fa59fc3a-ea9b-45bb-a190-1844834093e9\") " pod="openstack/aodh-db-sync-pzwv2" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.972384 4847 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa59fc3a-ea9b-45bb-a190-1844834093e9-scripts\") pod \"aodh-db-sync-pzwv2\" (UID: \"fa59fc3a-ea9b-45bb-a190-1844834093e9\") " pod="openstack/aodh-db-sync-pzwv2" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.973687 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa59fc3a-ea9b-45bb-a190-1844834093e9-combined-ca-bundle\") pod \"aodh-db-sync-pzwv2\" (UID: \"fa59fc3a-ea9b-45bb-a190-1844834093e9\") " pod="openstack/aodh-db-sync-pzwv2" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.974196 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa59fc3a-ea9b-45bb-a190-1844834093e9-config-data\") pod \"aodh-db-sync-pzwv2\" (UID: \"fa59fc3a-ea9b-45bb-a190-1844834093e9\") " pod="openstack/aodh-db-sync-pzwv2" Feb 18 00:50:31 crc kubenswrapper[4847]: I0218 00:50:31.986012 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcwwn\" (UniqueName: \"kubernetes.io/projected/fa59fc3a-ea9b-45bb-a190-1844834093e9-kube-api-access-gcwwn\") pod \"aodh-db-sync-pzwv2\" (UID: \"fa59fc3a-ea9b-45bb-a190-1844834093e9\") " pod="openstack/aodh-db-sync-pzwv2" Feb 18 00:50:32 crc kubenswrapper[4847]: I0218 00:50:32.112259 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-pzwv2" Feb 18 00:50:32 crc kubenswrapper[4847]: I0218 00:50:32.168980 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 18 00:50:32 crc kubenswrapper[4847]: W0218 00:50:32.171580 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e330065_0783_4200_8af0_e726b820aa6d.slice/crio-474f360ca3f1fcc3bc08e0ad0be8db067bc6521eee6a0af2341f3928375da8e9 WatchSource:0}: Error finding container 474f360ca3f1fcc3bc08e0ad0be8db067bc6521eee6a0af2341f3928375da8e9: Status 404 returned error can't find the container with id 474f360ca3f1fcc3bc08e0ad0be8db067bc6521eee6a0af2341f3928375da8e9 Feb 18 00:50:32 crc kubenswrapper[4847]: I0218 00:50:32.276460 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8f64ddc2-419d-4a08-8418-d19033c2b549","Type":"ContainerDied","Data":"c8cb75e6ec22dc35e13ab67aa1b550a1ba771b302c8c38cd4555a483bfe112c3"} Feb 18 00:50:32 crc kubenswrapper[4847]: I0218 00:50:32.276885 4847 scope.go:117] "RemoveContainer" containerID="cb7dbbb06a74dd6b04dbb76a2355b6cadda06a1117847475a3f9c4eb42874791" Feb 18 00:50:32 crc kubenswrapper[4847]: I0218 00:50:32.276736 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 00:50:32 crc kubenswrapper[4847]: I0218 00:50:32.289019 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9e330065-0783-4200-8af0-e726b820aa6d","Type":"ContainerStarted","Data":"474f360ca3f1fcc3bc08e0ad0be8db067bc6521eee6a0af2341f3928375da8e9"} Feb 18 00:50:32 crc kubenswrapper[4847]: I0218 00:50:32.355508 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:50:32 crc kubenswrapper[4847]: I0218 00:50:32.375393 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:50:32 crc kubenswrapper[4847]: I0218 00:50:32.390755 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:50:32 crc kubenswrapper[4847]: I0218 00:50:32.394892 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 00:50:32 crc kubenswrapper[4847]: I0218 00:50:32.397146 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 18 00:50:32 crc kubenswrapper[4847]: I0218 00:50:32.408925 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:50:32 crc kubenswrapper[4847]: I0218 00:50:32.482042 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45d5h\" (UniqueName: \"kubernetes.io/projected/4e90c4a6-c9f2-4487-ab03-f98ce10417bd-kube-api-access-45d5h\") pod \"nova-scheduler-0\" (UID: \"4e90c4a6-c9f2-4487-ab03-f98ce10417bd\") " pod="openstack/nova-scheduler-0" Feb 18 00:50:32 crc kubenswrapper[4847]: I0218 00:50:32.482099 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e90c4a6-c9f2-4487-ab03-f98ce10417bd-combined-ca-bundle\") pod 
\"nova-scheduler-0\" (UID: \"4e90c4a6-c9f2-4487-ab03-f98ce10417bd\") " pod="openstack/nova-scheduler-0" Feb 18 00:50:32 crc kubenswrapper[4847]: I0218 00:50:32.482145 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e90c4a6-c9f2-4487-ab03-f98ce10417bd-config-data\") pod \"nova-scheduler-0\" (UID: \"4e90c4a6-c9f2-4487-ab03-f98ce10417bd\") " pod="openstack/nova-scheduler-0" Feb 18 00:50:32 crc kubenswrapper[4847]: I0218 00:50:32.584271 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45d5h\" (UniqueName: \"kubernetes.io/projected/4e90c4a6-c9f2-4487-ab03-f98ce10417bd-kube-api-access-45d5h\") pod \"nova-scheduler-0\" (UID: \"4e90c4a6-c9f2-4487-ab03-f98ce10417bd\") " pod="openstack/nova-scheduler-0" Feb 18 00:50:32 crc kubenswrapper[4847]: I0218 00:50:32.584352 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e90c4a6-c9f2-4487-ab03-f98ce10417bd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4e90c4a6-c9f2-4487-ab03-f98ce10417bd\") " pod="openstack/nova-scheduler-0" Feb 18 00:50:32 crc kubenswrapper[4847]: I0218 00:50:32.584405 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e90c4a6-c9f2-4487-ab03-f98ce10417bd-config-data\") pod \"nova-scheduler-0\" (UID: \"4e90c4a6-c9f2-4487-ab03-f98ce10417bd\") " pod="openstack/nova-scheduler-0" Feb 18 00:50:32 crc kubenswrapper[4847]: I0218 00:50:32.594543 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e90c4a6-c9f2-4487-ab03-f98ce10417bd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4e90c4a6-c9f2-4487-ab03-f98ce10417bd\") " pod="openstack/nova-scheduler-0" Feb 18 00:50:32 crc kubenswrapper[4847]: I0218 
00:50:32.610391 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45d5h\" (UniqueName: \"kubernetes.io/projected/4e90c4a6-c9f2-4487-ab03-f98ce10417bd-kube-api-access-45d5h\") pod \"nova-scheduler-0\" (UID: \"4e90c4a6-c9f2-4487-ab03-f98ce10417bd\") " pod="openstack/nova-scheduler-0" Feb 18 00:50:32 crc kubenswrapper[4847]: I0218 00:50:32.611898 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e90c4a6-c9f2-4487-ab03-f98ce10417bd-config-data\") pod \"nova-scheduler-0\" (UID: \"4e90c4a6-c9f2-4487-ab03-f98ce10417bd\") " pod="openstack/nova-scheduler-0" Feb 18 00:50:32 crc kubenswrapper[4847]: I0218 00:50:32.623947 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-pzwv2"] Feb 18 00:50:32 crc kubenswrapper[4847]: I0218 00:50:32.725088 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 18 00:50:33 crc kubenswrapper[4847]: I0218 00:50:33.285013 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:50:33 crc kubenswrapper[4847]: I0218 00:50:33.314192 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4e90c4a6-c9f2-4487-ab03-f98ce10417bd","Type":"ContainerStarted","Data":"98f5c3309b6de687bff35785dd0af58d11cc11427ebd0676d13875fc52112799"} Feb 18 00:50:33 crc kubenswrapper[4847]: I0218 00:50:33.315308 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pzwv2" event={"ID":"fa59fc3a-ea9b-45bb-a190-1844834093e9","Type":"ContainerStarted","Data":"0b64cc80c3b26d33171a9b245d8b4b724036b838d91aafe78ba02c26240475f7"} Feb 18 00:50:33 crc kubenswrapper[4847]: I0218 00:50:33.319705 4847 generic.go:334] "Generic (PLEG): container finished" podID="a0577c3f-a57d-4691-861b-3107614b86bc" 
containerID="1745b060abde282a4918561646f01fba2bd009c32ab6d0e5503f2e0d1997780a" exitCode=0 Feb 18 00:50:33 crc kubenswrapper[4847]: I0218 00:50:33.319751 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0577c3f-a57d-4691-861b-3107614b86bc","Type":"ContainerDied","Data":"1745b060abde282a4918561646f01fba2bd009c32ab6d0e5503f2e0d1997780a"} Feb 18 00:50:33 crc kubenswrapper[4847]: I0218 00:50:33.326964 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9e330065-0783-4200-8af0-e726b820aa6d","Type":"ContainerStarted","Data":"df5eba8f8725e77f1cfc3e0f07498eebbe10b4e5765f2161df2a8d8fd97d32e1"} Feb 18 00:50:33 crc kubenswrapper[4847]: I0218 00:50:33.328323 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 18 00:50:33 crc kubenswrapper[4847]: I0218 00:50:33.349543 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.3495218270000002 podStartE2EDuration="2.349521827s" podCreationTimestamp="2026-02-18 00:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:50:33.34505892 +0000 UTC m=+1506.722409862" watchObservedRunningTime="2026-02-18 00:50:33.349521827 +0000 UTC m=+1506.726872769" Feb 18 00:50:33 crc kubenswrapper[4847]: I0218 00:50:33.417799 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f64ddc2-419d-4a08-8418-d19033c2b549" path="/var/lib/kubelet/pods/8f64ddc2-419d-4a08-8418-d19033c2b549/volumes" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.019293 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.126049 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbxtg\" (UniqueName: \"kubernetes.io/projected/24af341d-fcce-475e-95b2-ddd3c8d30114-kube-api-access-lbxtg\") pod \"24af341d-fcce-475e-95b2-ddd3c8d30114\" (UID: \"24af341d-fcce-475e-95b2-ddd3c8d30114\") " Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.126646 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24af341d-fcce-475e-95b2-ddd3c8d30114-config-data\") pod \"24af341d-fcce-475e-95b2-ddd3c8d30114\" (UID: \"24af341d-fcce-475e-95b2-ddd3c8d30114\") " Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.127636 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24af341d-fcce-475e-95b2-ddd3c8d30114-combined-ca-bundle\") pod \"24af341d-fcce-475e-95b2-ddd3c8d30114\" (UID: \"24af341d-fcce-475e-95b2-ddd3c8d30114\") " Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.127685 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24af341d-fcce-475e-95b2-ddd3c8d30114-logs\") pod \"24af341d-fcce-475e-95b2-ddd3c8d30114\" (UID: \"24af341d-fcce-475e-95b2-ddd3c8d30114\") " Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.128447 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24af341d-fcce-475e-95b2-ddd3c8d30114-logs" (OuterVolumeSpecName: "logs") pod "24af341d-fcce-475e-95b2-ddd3c8d30114" (UID: "24af341d-fcce-475e-95b2-ddd3c8d30114"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.129035 4847 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24af341d-fcce-475e-95b2-ddd3c8d30114-logs\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.133222 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24af341d-fcce-475e-95b2-ddd3c8d30114-kube-api-access-lbxtg" (OuterVolumeSpecName: "kube-api-access-lbxtg") pod "24af341d-fcce-475e-95b2-ddd3c8d30114" (UID: "24af341d-fcce-475e-95b2-ddd3c8d30114"). InnerVolumeSpecName "kube-api-access-lbxtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.159452 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24af341d-fcce-475e-95b2-ddd3c8d30114-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24af341d-fcce-475e-95b2-ddd3c8d30114" (UID: "24af341d-fcce-475e-95b2-ddd3c8d30114"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.175170 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24af341d-fcce-475e-95b2-ddd3c8d30114-config-data" (OuterVolumeSpecName: "config-data") pod "24af341d-fcce-475e-95b2-ddd3c8d30114" (UID: "24af341d-fcce-475e-95b2-ddd3c8d30114"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.230910 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbxtg\" (UniqueName: \"kubernetes.io/projected/24af341d-fcce-475e-95b2-ddd3c8d30114-kube-api-access-lbxtg\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.230937 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24af341d-fcce-475e-95b2-ddd3c8d30114-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.230948 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24af341d-fcce-475e-95b2-ddd3c8d30114-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.351713 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4e90c4a6-c9f2-4487-ab03-f98ce10417bd","Type":"ContainerStarted","Data":"a953a1d8b5fad80f99851cb6e362a10662d6f7e5c265d5247a60e999b411950a"} Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.354020 4847 generic.go:334] "Generic (PLEG): container finished" podID="24af341d-fcce-475e-95b2-ddd3c8d30114" containerID="6e2cc041f47cdc6c6177499b58cbb653456046f8dcd6ec586466af3eac5e847e" exitCode=0 Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.354080 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.354111 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"24af341d-fcce-475e-95b2-ddd3c8d30114","Type":"ContainerDied","Data":"6e2cc041f47cdc6c6177499b58cbb653456046f8dcd6ec586466af3eac5e847e"} Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.354159 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"24af341d-fcce-475e-95b2-ddd3c8d30114","Type":"ContainerDied","Data":"51c98bc76fec7e94f33a7f36023178c266511fd173177ae7b0efc06b1eaf5805"} Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.354180 4847 scope.go:117] "RemoveContainer" containerID="6e2cc041f47cdc6c6177499b58cbb653456046f8dcd6ec586466af3eac5e847e" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.381429 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.381402369 podStartE2EDuration="2.381402369s" podCreationTimestamp="2026-02-18 00:50:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:50:34.373335845 +0000 UTC m=+1507.750686807" watchObservedRunningTime="2026-02-18 00:50:34.381402369 +0000 UTC m=+1507.758753321" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.390789 4847 scope.go:117] "RemoveContainer" containerID="912a5aa02801268fce64bdc934c9801d20a337511ab7cf8be2ae7c1bba9b57c2" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.403994 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.427851 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.441316 4847 scope.go:117] "RemoveContainer" 
containerID="6e2cc041f47cdc6c6177499b58cbb653456046f8dcd6ec586466af3eac5e847e" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.441440 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 00:50:34 crc kubenswrapper[4847]: E0218 00:50:34.442130 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24af341d-fcce-475e-95b2-ddd3c8d30114" containerName="nova-api-api" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.442195 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="24af341d-fcce-475e-95b2-ddd3c8d30114" containerName="nova-api-api" Feb 18 00:50:34 crc kubenswrapper[4847]: E0218 00:50:34.442283 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24af341d-fcce-475e-95b2-ddd3c8d30114" containerName="nova-api-log" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.442331 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="24af341d-fcce-475e-95b2-ddd3c8d30114" containerName="nova-api-log" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.442611 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="24af341d-fcce-475e-95b2-ddd3c8d30114" containerName="nova-api-log" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.442677 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="24af341d-fcce-475e-95b2-ddd3c8d30114" containerName="nova-api-api" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.443906 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:50:34 crc kubenswrapper[4847]: E0218 00:50:34.444105 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e2cc041f47cdc6c6177499b58cbb653456046f8dcd6ec586466af3eac5e847e\": container with ID starting with 6e2cc041f47cdc6c6177499b58cbb653456046f8dcd6ec586466af3eac5e847e not found: ID does not exist" containerID="6e2cc041f47cdc6c6177499b58cbb653456046f8dcd6ec586466af3eac5e847e" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.444261 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e2cc041f47cdc6c6177499b58cbb653456046f8dcd6ec586466af3eac5e847e"} err="failed to get container status \"6e2cc041f47cdc6c6177499b58cbb653456046f8dcd6ec586466af3eac5e847e\": rpc error: code = NotFound desc = could not find container \"6e2cc041f47cdc6c6177499b58cbb653456046f8dcd6ec586466af3eac5e847e\": container with ID starting with 6e2cc041f47cdc6c6177499b58cbb653456046f8dcd6ec586466af3eac5e847e not found: ID does not exist" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.444290 4847 scope.go:117] "RemoveContainer" containerID="912a5aa02801268fce64bdc934c9801d20a337511ab7cf8be2ae7c1bba9b57c2" Feb 18 00:50:34 crc kubenswrapper[4847]: E0218 00:50:34.446536 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"912a5aa02801268fce64bdc934c9801d20a337511ab7cf8be2ae7c1bba9b57c2\": container with ID starting with 912a5aa02801268fce64bdc934c9801d20a337511ab7cf8be2ae7c1bba9b57c2 not found: ID does not exist" containerID="912a5aa02801268fce64bdc934c9801d20a337511ab7cf8be2ae7c1bba9b57c2" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.446566 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"912a5aa02801268fce64bdc934c9801d20a337511ab7cf8be2ae7c1bba9b57c2"} err="failed to 
get container status \"912a5aa02801268fce64bdc934c9801d20a337511ab7cf8be2ae7c1bba9b57c2\": rpc error: code = NotFound desc = could not find container \"912a5aa02801268fce64bdc934c9801d20a337511ab7cf8be2ae7c1bba9b57c2\": container with ID starting with 912a5aa02801268fce64bdc934c9801d20a337511ab7cf8be2ae7c1bba9b57c2 not found: ID does not exist" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.448171 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.448483 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.553473 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ced9424c-96b3-40f4-801b-4f817c0845a7-config-data\") pod \"nova-api-0\" (UID: \"ced9424c-96b3-40f4-801b-4f817c0845a7\") " pod="openstack/nova-api-0" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.553595 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced9424c-96b3-40f4-801b-4f817c0845a7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ced9424c-96b3-40f4-801b-4f817c0845a7\") " pod="openstack/nova-api-0" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.553680 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4z9m\" (UniqueName: \"kubernetes.io/projected/ced9424c-96b3-40f4-801b-4f817c0845a7-kube-api-access-l4z9m\") pod \"nova-api-0\" (UID: \"ced9424c-96b3-40f4-801b-4f817c0845a7\") " pod="openstack/nova-api-0" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.554086 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ced9424c-96b3-40f4-801b-4f817c0845a7-logs\") pod \"nova-api-0\" (UID: \"ced9424c-96b3-40f4-801b-4f817c0845a7\") " pod="openstack/nova-api-0" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.658151 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced9424c-96b3-40f4-801b-4f817c0845a7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ced9424c-96b3-40f4-801b-4f817c0845a7\") " pod="openstack/nova-api-0" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.658246 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4z9m\" (UniqueName: \"kubernetes.io/projected/ced9424c-96b3-40f4-801b-4f817c0845a7-kube-api-access-l4z9m\") pod \"nova-api-0\" (UID: \"ced9424c-96b3-40f4-801b-4f817c0845a7\") " pod="openstack/nova-api-0" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.658271 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ced9424c-96b3-40f4-801b-4f817c0845a7-logs\") pod \"nova-api-0\" (UID: \"ced9424c-96b3-40f4-801b-4f817c0845a7\") " pod="openstack/nova-api-0" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.658354 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ced9424c-96b3-40f4-801b-4f817c0845a7-config-data\") pod \"nova-api-0\" (UID: \"ced9424c-96b3-40f4-801b-4f817c0845a7\") " pod="openstack/nova-api-0" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.659574 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ced9424c-96b3-40f4-801b-4f817c0845a7-logs\") pod \"nova-api-0\" (UID: \"ced9424c-96b3-40f4-801b-4f817c0845a7\") " pod="openstack/nova-api-0" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.662027 4847 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced9424c-96b3-40f4-801b-4f817c0845a7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ced9424c-96b3-40f4-801b-4f817c0845a7\") " pod="openstack/nova-api-0" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.662311 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ced9424c-96b3-40f4-801b-4f817c0845a7-config-data\") pod \"nova-api-0\" (UID: \"ced9424c-96b3-40f4-801b-4f817c0845a7\") " pod="openstack/nova-api-0" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.675710 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4z9m\" (UniqueName: \"kubernetes.io/projected/ced9424c-96b3-40f4-801b-4f817c0845a7-kube-api-access-l4z9m\") pod \"nova-api-0\" (UID: \"ced9424c-96b3-40f4-801b-4f817c0845a7\") " pod="openstack/nova-api-0" Feb 18 00:50:34 crc kubenswrapper[4847]: I0218 00:50:34.766779 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:50:35 crc kubenswrapper[4847]: I0218 00:50:35.238869 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:50:35 crc kubenswrapper[4847]: I0218 00:50:35.418269 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24af341d-fcce-475e-95b2-ddd3c8d30114" path="/var/lib/kubelet/pods/24af341d-fcce-475e-95b2-ddd3c8d30114/volumes" Feb 18 00:50:36 crc kubenswrapper[4847]: I0218 00:50:36.403975 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:50:36 crc kubenswrapper[4847]: E0218 00:50:36.404252 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:50:37 crc kubenswrapper[4847]: I0218 00:50:37.389446 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ced9424c-96b3-40f4-801b-4f817c0845a7","Type":"ContainerStarted","Data":"a0daadb055480fddbfe0b05962be57bb076eac0a7fdd899040e06abe7cc41746"} Feb 18 00:50:37 crc kubenswrapper[4847]: I0218 00:50:37.725829 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 00:50:38 crc kubenswrapper[4847]: I0218 00:50:38.399915 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ced9424c-96b3-40f4-801b-4f817c0845a7","Type":"ContainerStarted","Data":"599d8e16a42121604947d8d8e89eedda877fd44208cb7ed93f3b2abc6e9956d7"} Feb 18 00:50:38 crc kubenswrapper[4847]: I0218 00:50:38.400204 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"ced9424c-96b3-40f4-801b-4f817c0845a7","Type":"ContainerStarted","Data":"e316e0190c1c06168503ad9ab13cf990bd7380b0a56fa49f192bc5c4dc9608ec"} Feb 18 00:50:38 crc kubenswrapper[4847]: I0218 00:50:38.402116 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pzwv2" event={"ID":"fa59fc3a-ea9b-45bb-a190-1844834093e9","Type":"ContainerStarted","Data":"42d62458a85e69b80e6ca971c691a9e4ea5105d3707936cf4ef10043759fb314"} Feb 18 00:50:38 crc kubenswrapper[4847]: I0218 00:50:38.426685 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.426665992 podStartE2EDuration="4.426665992s" podCreationTimestamp="2026-02-18 00:50:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:50:38.417632254 +0000 UTC m=+1511.794983196" watchObservedRunningTime="2026-02-18 00:50:38.426665992 +0000 UTC m=+1511.804016944" Feb 18 00:50:38 crc kubenswrapper[4847]: I0218 00:50:38.450259 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-pzwv2" podStartSLOduration=2.7481619459999997 podStartE2EDuration="7.45023895s" podCreationTimestamp="2026-02-18 00:50:31 +0000 UTC" firstStartedPulling="2026-02-18 00:50:32.63605284 +0000 UTC m=+1506.013403782" lastFinishedPulling="2026-02-18 00:50:37.338129814 +0000 UTC m=+1510.715480786" observedRunningTime="2026-02-18 00:50:38.43736294 +0000 UTC m=+1511.814713892" watchObservedRunningTime="2026-02-18 00:50:38.45023895 +0000 UTC m=+1511.827589892" Feb 18 00:50:40 crc kubenswrapper[4847]: I0218 00:50:40.436998 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pzwv2" event={"ID":"fa59fc3a-ea9b-45bb-a190-1844834093e9","Type":"ContainerDied","Data":"42d62458a85e69b80e6ca971c691a9e4ea5105d3707936cf4ef10043759fb314"} Feb 18 00:50:40 crc kubenswrapper[4847]: 
I0218 00:50:40.436926 4847 generic.go:334] "Generic (PLEG): container finished" podID="fa59fc3a-ea9b-45bb-a190-1844834093e9" containerID="42d62458a85e69b80e6ca971c691a9e4ea5105d3707936cf4ef10043759fb314" exitCode=0 Feb 18 00:50:41 crc kubenswrapper[4847]: I0218 00:50:41.674933 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 18 00:50:41 crc kubenswrapper[4847]: I0218 00:50:41.995877 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-pzwv2" Feb 18 00:50:42 crc kubenswrapper[4847]: I0218 00:50:42.138559 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa59fc3a-ea9b-45bb-a190-1844834093e9-config-data\") pod \"fa59fc3a-ea9b-45bb-a190-1844834093e9\" (UID: \"fa59fc3a-ea9b-45bb-a190-1844834093e9\") " Feb 18 00:50:42 crc kubenswrapper[4847]: I0218 00:50:42.139042 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa59fc3a-ea9b-45bb-a190-1844834093e9-combined-ca-bundle\") pod \"fa59fc3a-ea9b-45bb-a190-1844834093e9\" (UID: \"fa59fc3a-ea9b-45bb-a190-1844834093e9\") " Feb 18 00:50:42 crc kubenswrapper[4847]: I0218 00:50:42.139083 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa59fc3a-ea9b-45bb-a190-1844834093e9-scripts\") pod \"fa59fc3a-ea9b-45bb-a190-1844834093e9\" (UID: \"fa59fc3a-ea9b-45bb-a190-1844834093e9\") " Feb 18 00:50:42 crc kubenswrapper[4847]: I0218 00:50:42.139113 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcwwn\" (UniqueName: \"kubernetes.io/projected/fa59fc3a-ea9b-45bb-a190-1844834093e9-kube-api-access-gcwwn\") pod \"fa59fc3a-ea9b-45bb-a190-1844834093e9\" (UID: \"fa59fc3a-ea9b-45bb-a190-1844834093e9\") " Feb 18 00:50:42 crc 
kubenswrapper[4847]: I0218 00:50:42.146103 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa59fc3a-ea9b-45bb-a190-1844834093e9-scripts" (OuterVolumeSpecName: "scripts") pod "fa59fc3a-ea9b-45bb-a190-1844834093e9" (UID: "fa59fc3a-ea9b-45bb-a190-1844834093e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:42 crc kubenswrapper[4847]: I0218 00:50:42.146671 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa59fc3a-ea9b-45bb-a190-1844834093e9-kube-api-access-gcwwn" (OuterVolumeSpecName: "kube-api-access-gcwwn") pod "fa59fc3a-ea9b-45bb-a190-1844834093e9" (UID: "fa59fc3a-ea9b-45bb-a190-1844834093e9"). InnerVolumeSpecName "kube-api-access-gcwwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:50:42 crc kubenswrapper[4847]: I0218 00:50:42.181587 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa59fc3a-ea9b-45bb-a190-1844834093e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa59fc3a-ea9b-45bb-a190-1844834093e9" (UID: "fa59fc3a-ea9b-45bb-a190-1844834093e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:42 crc kubenswrapper[4847]: I0218 00:50:42.189470 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa59fc3a-ea9b-45bb-a190-1844834093e9-config-data" (OuterVolumeSpecName: "config-data") pod "fa59fc3a-ea9b-45bb-a190-1844834093e9" (UID: "fa59fc3a-ea9b-45bb-a190-1844834093e9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:42 crc kubenswrapper[4847]: I0218 00:50:42.241853 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa59fc3a-ea9b-45bb-a190-1844834093e9-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:42 crc kubenswrapper[4847]: I0218 00:50:42.241888 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa59fc3a-ea9b-45bb-a190-1844834093e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:42 crc kubenswrapper[4847]: I0218 00:50:42.241900 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa59fc3a-ea9b-45bb-a190-1844834093e9-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:42 crc kubenswrapper[4847]: I0218 00:50:42.241911 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcwwn\" (UniqueName: \"kubernetes.io/projected/fa59fc3a-ea9b-45bb-a190-1844834093e9-kube-api-access-gcwwn\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:42 crc kubenswrapper[4847]: I0218 00:50:42.464742 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-pzwv2" event={"ID":"fa59fc3a-ea9b-45bb-a190-1844834093e9","Type":"ContainerDied","Data":"0b64cc80c3b26d33171a9b245d8b4b724036b838d91aafe78ba02c26240475f7"} Feb 18 00:50:42 crc kubenswrapper[4847]: I0218 00:50:42.464802 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b64cc80c3b26d33171a9b245d8b4b724036b838d91aafe78ba02c26240475f7" Feb 18 00:50:42 crc kubenswrapper[4847]: I0218 00:50:42.464817 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-pzwv2" Feb 18 00:50:42 crc kubenswrapper[4847]: I0218 00:50:42.725373 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 00:50:42 crc kubenswrapper[4847]: I0218 00:50:42.765760 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 00:50:43 crc kubenswrapper[4847]: I0218 00:50:43.515456 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 00:50:44 crc kubenswrapper[4847]: I0218 00:50:44.767466 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 00:50:44 crc kubenswrapper[4847]: I0218 00:50:44.767545 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 00:50:45 crc kubenswrapper[4847]: I0218 00:50:45.849793 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ced9424c-96b3-40f4-801b-4f817c0845a7" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.236:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 00:50:45 crc kubenswrapper[4847]: I0218 00:50:45.849858 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ced9424c-96b3-40f4-801b-4f817c0845a7" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.236:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 00:50:46 crc kubenswrapper[4847]: I0218 00:50:46.348497 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 18 00:50:46 crc kubenswrapper[4847]: E0218 00:50:46.349588 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa59fc3a-ea9b-45bb-a190-1844834093e9" containerName="aodh-db-sync" Feb 18 00:50:46 crc 
kubenswrapper[4847]: I0218 00:50:46.349651 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa59fc3a-ea9b-45bb-a190-1844834093e9" containerName="aodh-db-sync" Feb 18 00:50:46 crc kubenswrapper[4847]: I0218 00:50:46.350071 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa59fc3a-ea9b-45bb-a190-1844834093e9" containerName="aodh-db-sync" Feb 18 00:50:46 crc kubenswrapper[4847]: I0218 00:50:46.355563 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 18 00:50:46 crc kubenswrapper[4847]: I0218 00:50:46.357381 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 18 00:50:46 crc kubenswrapper[4847]: I0218 00:50:46.358116 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-9sw76" Feb 18 00:50:46 crc kubenswrapper[4847]: I0218 00:50:46.358178 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 18 00:50:46 crc kubenswrapper[4847]: I0218 00:50:46.359556 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 18 00:50:46 crc kubenswrapper[4847]: I0218 00:50:46.448624 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c32cae7-3099-475a-b844-0c4b66a5f4ff-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0c32cae7-3099-475a-b844-0c4b66a5f4ff\") " pod="openstack/aodh-0" Feb 18 00:50:46 crc kubenswrapper[4847]: I0218 00:50:46.448968 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c32cae7-3099-475a-b844-0c4b66a5f4ff-config-data\") pod \"aodh-0\" (UID: \"0c32cae7-3099-475a-b844-0c4b66a5f4ff\") " pod="openstack/aodh-0" Feb 18 00:50:46 crc kubenswrapper[4847]: I0218 00:50:46.449115 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c32cae7-3099-475a-b844-0c4b66a5f4ff-scripts\") pod \"aodh-0\" (UID: \"0c32cae7-3099-475a-b844-0c4b66a5f4ff\") " pod="openstack/aodh-0" Feb 18 00:50:46 crc kubenswrapper[4847]: I0218 00:50:46.449232 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sffxs\" (UniqueName: \"kubernetes.io/projected/0c32cae7-3099-475a-b844-0c4b66a5f4ff-kube-api-access-sffxs\") pod \"aodh-0\" (UID: \"0c32cae7-3099-475a-b844-0c4b66a5f4ff\") " pod="openstack/aodh-0" Feb 18 00:50:46 crc kubenswrapper[4847]: I0218 00:50:46.550993 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sffxs\" (UniqueName: \"kubernetes.io/projected/0c32cae7-3099-475a-b844-0c4b66a5f4ff-kube-api-access-sffxs\") pod \"aodh-0\" (UID: \"0c32cae7-3099-475a-b844-0c4b66a5f4ff\") " pod="openstack/aodh-0" Feb 18 00:50:46 crc kubenswrapper[4847]: I0218 00:50:46.551072 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c32cae7-3099-475a-b844-0c4b66a5f4ff-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0c32cae7-3099-475a-b844-0c4b66a5f4ff\") " pod="openstack/aodh-0" Feb 18 00:50:46 crc kubenswrapper[4847]: I0218 00:50:46.551167 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c32cae7-3099-475a-b844-0c4b66a5f4ff-config-data\") pod \"aodh-0\" (UID: \"0c32cae7-3099-475a-b844-0c4b66a5f4ff\") " pod="openstack/aodh-0" Feb 18 00:50:46 crc kubenswrapper[4847]: I0218 00:50:46.551339 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c32cae7-3099-475a-b844-0c4b66a5f4ff-scripts\") pod \"aodh-0\" (UID: \"0c32cae7-3099-475a-b844-0c4b66a5f4ff\") " 
pod="openstack/aodh-0" Feb 18 00:50:46 crc kubenswrapper[4847]: I0218 00:50:46.562708 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c32cae7-3099-475a-b844-0c4b66a5f4ff-combined-ca-bundle\") pod \"aodh-0\" (UID: \"0c32cae7-3099-475a-b844-0c4b66a5f4ff\") " pod="openstack/aodh-0" Feb 18 00:50:46 crc kubenswrapper[4847]: I0218 00:50:46.564128 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c32cae7-3099-475a-b844-0c4b66a5f4ff-config-data\") pod \"aodh-0\" (UID: \"0c32cae7-3099-475a-b844-0c4b66a5f4ff\") " pod="openstack/aodh-0" Feb 18 00:50:46 crc kubenswrapper[4847]: I0218 00:50:46.577031 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c32cae7-3099-475a-b844-0c4b66a5f4ff-scripts\") pod \"aodh-0\" (UID: \"0c32cae7-3099-475a-b844-0c4b66a5f4ff\") " pod="openstack/aodh-0" Feb 18 00:50:46 crc kubenswrapper[4847]: I0218 00:50:46.578148 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sffxs\" (UniqueName: \"kubernetes.io/projected/0c32cae7-3099-475a-b844-0c4b66a5f4ff-kube-api-access-sffxs\") pod \"aodh-0\" (UID: \"0c32cae7-3099-475a-b844-0c4b66a5f4ff\") " pod="openstack/aodh-0" Feb 18 00:50:46 crc kubenswrapper[4847]: I0218 00:50:46.682926 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 18 00:50:47 crc kubenswrapper[4847]: I0218 00:50:47.231883 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 18 00:50:47 crc kubenswrapper[4847]: W0218 00:50:47.238211 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c32cae7_3099_475a_b844_0c4b66a5f4ff.slice/crio-091a063525e3d4be8e94175caa516ee092ad86a224bf7be26b8217cdae6c0254 WatchSource:0}: Error finding container 091a063525e3d4be8e94175caa516ee092ad86a224bf7be26b8217cdae6c0254: Status 404 returned error can't find the container with id 091a063525e3d4be8e94175caa516ee092ad86a224bf7be26b8217cdae6c0254 Feb 18 00:50:47 crc kubenswrapper[4847]: I0218 00:50:47.533629 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0c32cae7-3099-475a-b844-0c4b66a5f4ff","Type":"ContainerStarted","Data":"091a063525e3d4be8e94175caa516ee092ad86a224bf7be26b8217cdae6c0254"} Feb 18 00:50:48 crc kubenswrapper[4847]: I0218 00:50:48.553115 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0c32cae7-3099-475a-b844-0c4b66a5f4ff","Type":"ContainerStarted","Data":"2534f5fc438c559c89f6d6d08e223a52da853bb5eaf6ad57aebbda87c342e31f"} Feb 18 00:50:49 crc kubenswrapper[4847]: I0218 00:50:49.404081 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:50:49 crc kubenswrapper[4847]: E0218 00:50:49.404805 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 
00:50:49 crc kubenswrapper[4847]: I0218 00:50:49.641544 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 18 00:50:50 crc kubenswrapper[4847]: I0218 00:50:50.594991 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0c32cae7-3099-475a-b844-0c4b66a5f4ff","Type":"ContainerStarted","Data":"1e495f030bc4a6bda4b45b0cbc5b919c2ab1f241f71cdd5cd50b2b2b26bd9aa5"} Feb 18 00:50:51 crc kubenswrapper[4847]: I0218 00:50:51.608536 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0c32cae7-3099-475a-b844-0c4b66a5f4ff","Type":"ContainerStarted","Data":"d4ed6f49ff08ca869c4203e4430dcca48f111ea620d4d62fb1c5a0a968389b88"} Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.664240 4847 generic.go:334] "Generic (PLEG): container finished" podID="f76974c5-d87a-4a52-ac85-364597594818" containerID="2fda87e2d268beee4b519656d4738f502c2059cc5df0e971986a493d52ab56c2" exitCode=137 Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.664714 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f76974c5-d87a-4a52-ac85-364597594818","Type":"ContainerDied","Data":"2fda87e2d268beee4b519656d4738f502c2059cc5df0e971986a493d52ab56c2"} Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.664770 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f76974c5-d87a-4a52-ac85-364597594818","Type":"ContainerDied","Data":"70fe711aa3af5aaa34ba0dabeea926811e27bffb349d3f592bb8cf595340d653"} Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.664782 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70fe711aa3af5aaa34ba0dabeea926811e27bffb349d3f592bb8cf595340d653" Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.684890 4847 generic.go:334] "Generic (PLEG): container finished" podID="d54442ab-bec7-429c-ae47-6c781844eb4b" 
containerID="7ec4b758586d9607221da99ec8f3966dd12b11114148d7700282698bfce92415" exitCode=137 Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.685220 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d54442ab-bec7-429c-ae47-6c781844eb4b","Type":"ContainerDied","Data":"7ec4b758586d9607221da99ec8f3966dd12b11114148d7700282698bfce92415"} Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.717718 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0c32cae7-3099-475a-b844-0c4b66a5f4ff","Type":"ContainerStarted","Data":"6fd80562dfe52dec8e37a5e0187ce152ddbc00f3c11cf8b4fa22598e2a264800"} Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.717880 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="0c32cae7-3099-475a-b844-0c4b66a5f4ff" containerName="aodh-api" containerID="cri-o://2534f5fc438c559c89f6d6d08e223a52da853bb5eaf6ad57aebbda87c342e31f" gracePeriod=30 Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.718401 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="0c32cae7-3099-475a-b844-0c4b66a5f4ff" containerName="aodh-listener" containerID="cri-o://6fd80562dfe52dec8e37a5e0187ce152ddbc00f3c11cf8b4fa22598e2a264800" gracePeriod=30 Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.718451 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="0c32cae7-3099-475a-b844-0c4b66a5f4ff" containerName="aodh-notifier" containerID="cri-o://d4ed6f49ff08ca869c4203e4430dcca48f111ea620d4d62fb1c5a0a968389b88" gracePeriod=30 Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.718486 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="0c32cae7-3099-475a-b844-0c4b66a5f4ff" containerName="aodh-evaluator" 
containerID="cri-o://1e495f030bc4a6bda4b45b0cbc5b919c2ab1f241f71cdd5cd50b2b2b26bd9aa5" gracePeriod=30 Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.736911 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.765680 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.776569 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.5321288429999997 podStartE2EDuration="7.77654978s" podCreationTimestamp="2026-02-18 00:50:46 +0000 UTC" firstStartedPulling="2026-02-18 00:50:47.241745041 +0000 UTC m=+1520.619095983" lastFinishedPulling="2026-02-18 00:50:52.486165978 +0000 UTC m=+1525.863516920" observedRunningTime="2026-02-18 00:50:53.751373973 +0000 UTC m=+1527.128724915" watchObservedRunningTime="2026-02-18 00:50:53.77654978 +0000 UTC m=+1527.153900722" Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.829799 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f76974c5-d87a-4a52-ac85-364597594818-logs\") pod \"f76974c5-d87a-4a52-ac85-364597594818\" (UID: \"f76974c5-d87a-4a52-ac85-364597594818\") " Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.829949 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76974c5-d87a-4a52-ac85-364597594818-config-data\") pod \"f76974c5-d87a-4a52-ac85-364597594818\" (UID: \"f76974c5-d87a-4a52-ac85-364597594818\") " Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.829983 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k96hf\" (UniqueName: 
\"kubernetes.io/projected/f76974c5-d87a-4a52-ac85-364597594818-kube-api-access-k96hf\") pod \"f76974c5-d87a-4a52-ac85-364597594818\" (UID: \"f76974c5-d87a-4a52-ac85-364597594818\") " Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.830011 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76974c5-d87a-4a52-ac85-364597594818-combined-ca-bundle\") pod \"f76974c5-d87a-4a52-ac85-364597594818\" (UID: \"f76974c5-d87a-4a52-ac85-364597594818\") " Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.836970 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f76974c5-d87a-4a52-ac85-364597594818-logs" (OuterVolumeSpecName: "logs") pod "f76974c5-d87a-4a52-ac85-364597594818" (UID: "f76974c5-d87a-4a52-ac85-364597594818"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.883192 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a0577c3f-a57d-4691-861b-3107614b86bc" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.893559 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f76974c5-d87a-4a52-ac85-364597594818-kube-api-access-k96hf" (OuterVolumeSpecName: "kube-api-access-k96hf") pod "f76974c5-d87a-4a52-ac85-364597594818" (UID: "f76974c5-d87a-4a52-ac85-364597594818"). InnerVolumeSpecName "kube-api-access-k96hf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.933241 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d54442ab-bec7-429c-ae47-6c781844eb4b-config-data\") pod \"d54442ab-bec7-429c-ae47-6c781844eb4b\" (UID: \"d54442ab-bec7-429c-ae47-6c781844eb4b\") " Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.933522 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6mlg\" (UniqueName: \"kubernetes.io/projected/d54442ab-bec7-429c-ae47-6c781844eb4b-kube-api-access-k6mlg\") pod \"d54442ab-bec7-429c-ae47-6c781844eb4b\" (UID: \"d54442ab-bec7-429c-ae47-6c781844eb4b\") " Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.933722 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d54442ab-bec7-429c-ae47-6c781844eb4b-combined-ca-bundle\") pod \"d54442ab-bec7-429c-ae47-6c781844eb4b\" (UID: \"d54442ab-bec7-429c-ae47-6c781844eb4b\") " Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.934498 4847 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f76974c5-d87a-4a52-ac85-364597594818-logs\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.934570 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k96hf\" (UniqueName: \"kubernetes.io/projected/f76974c5-d87a-4a52-ac85-364597594818-kube-api-access-k96hf\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.941142 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d54442ab-bec7-429c-ae47-6c781844eb4b-kube-api-access-k6mlg" (OuterVolumeSpecName: "kube-api-access-k6mlg") pod "d54442ab-bec7-429c-ae47-6c781844eb4b" (UID: 
"d54442ab-bec7-429c-ae47-6c781844eb4b"). InnerVolumeSpecName "kube-api-access-k6mlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.954362 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f76974c5-d87a-4a52-ac85-364597594818-config-data" (OuterVolumeSpecName: "config-data") pod "f76974c5-d87a-4a52-ac85-364597594818" (UID: "f76974c5-d87a-4a52-ac85-364597594818"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.955708 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f76974c5-d87a-4a52-ac85-364597594818-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f76974c5-d87a-4a52-ac85-364597594818" (UID: "f76974c5-d87a-4a52-ac85-364597594818"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:53 crc kubenswrapper[4847]: I0218 00:50:53.978415 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d54442ab-bec7-429c-ae47-6c781844eb4b-config-data" (OuterVolumeSpecName: "config-data") pod "d54442ab-bec7-429c-ae47-6c781844eb4b" (UID: "d54442ab-bec7-429c-ae47-6c781844eb4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.027773 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d54442ab-bec7-429c-ae47-6c781844eb4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d54442ab-bec7-429c-ae47-6c781844eb4b" (UID: "d54442ab-bec7-429c-ae47-6c781844eb4b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.041828 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76974c5-d87a-4a52-ac85-364597594818-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.041867 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76974c5-d87a-4a52-ac85-364597594818-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.041883 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d54442ab-bec7-429c-ae47-6c781844eb4b-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.041893 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6mlg\" (UniqueName: \"kubernetes.io/projected/d54442ab-bec7-429c-ae47-6c781844eb4b-kube-api-access-k6mlg\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.041902 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d54442ab-bec7-429c-ae47-6c781844eb4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.727615 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d54442ab-bec7-429c-ae47-6c781844eb4b","Type":"ContainerDied","Data":"cac5780920d91cb2facebdb2c5571bf1c241a76b8c8ed383939593e5cbfa1523"} Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.728763 4847 scope.go:117] "RemoveContainer" containerID="7ec4b758586d9607221da99ec8f3966dd12b11114148d7700282698bfce92415" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.727658 4847 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.743885 4847 generic.go:334] "Generic (PLEG): container finished" podID="0c32cae7-3099-475a-b844-0c4b66a5f4ff" containerID="d4ed6f49ff08ca869c4203e4430dcca48f111ea620d4d62fb1c5a0a968389b88" exitCode=0 Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.743918 4847 generic.go:334] "Generic (PLEG): container finished" podID="0c32cae7-3099-475a-b844-0c4b66a5f4ff" containerID="1e495f030bc4a6bda4b45b0cbc5b919c2ab1f241f71cdd5cd50b2b2b26bd9aa5" exitCode=0 Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.743927 4847 generic.go:334] "Generic (PLEG): container finished" podID="0c32cae7-3099-475a-b844-0c4b66a5f4ff" containerID="2534f5fc438c559c89f6d6d08e223a52da853bb5eaf6ad57aebbda87c342e31f" exitCode=0 Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.743930 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0c32cae7-3099-475a-b844-0c4b66a5f4ff","Type":"ContainerDied","Data":"d4ed6f49ff08ca869c4203e4430dcca48f111ea620d4d62fb1c5a0a968389b88"} Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.743981 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0c32cae7-3099-475a-b844-0c4b66a5f4ff","Type":"ContainerDied","Data":"1e495f030bc4a6bda4b45b0cbc5b919c2ab1f241f71cdd5cd50b2b2b26bd9aa5"} Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.743991 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0c32cae7-3099-475a-b844-0c4b66a5f4ff","Type":"ContainerDied","Data":"2534f5fc438c559c89f6d6d08e223a52da853bb5eaf6ad57aebbda87c342e31f"} Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.744002 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.773244 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.781715 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.782438 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.782505 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.783973 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.815287 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.832680 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.851217 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 00:50:54 crc kubenswrapper[4847]: E0218 00:50:54.883059 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f76974c5-d87a-4a52-ac85-364597594818" containerName="nova-metadata-log" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.883160 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="f76974c5-d87a-4a52-ac85-364597594818" containerName="nova-metadata-log" Feb 18 00:50:54 crc kubenswrapper[4847]: E0218 00:50:54.883179 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d54442ab-bec7-429c-ae47-6c781844eb4b" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 00:50:54 crc kubenswrapper[4847]: 
I0218 00:50:54.883188 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="d54442ab-bec7-429c-ae47-6c781844eb4b" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 00:50:54 crc kubenswrapper[4847]: E0218 00:50:54.883205 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f76974c5-d87a-4a52-ac85-364597594818" containerName="nova-metadata-metadata" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.883212 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="f76974c5-d87a-4a52-ac85-364597594818" containerName="nova-metadata-metadata" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.902317 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="f76974c5-d87a-4a52-ac85-364597594818" containerName="nova-metadata-metadata" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.902889 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="d54442ab-bec7-429c-ae47-6c781844eb4b" containerName="nova-cell1-novncproxy-novncproxy" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.902982 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="f76974c5-d87a-4a52-ac85-364597594818" containerName="nova-metadata-log" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.904048 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.910128 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.910237 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.910412 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.913642 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.923907 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.934640 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.936419 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.941009 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.941244 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 18 00:50:54 crc kubenswrapper[4847]: I0218 00:50:54.955286 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.065203 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m48qs\" (UniqueName: \"kubernetes.io/projected/42fa6a77-748b-44bb-8a59-c9b083d917df-kube-api-access-m48qs\") pod \"nova-metadata-0\" (UID: \"42fa6a77-748b-44bb-8a59-c9b083d917df\") " pod="openstack/nova-metadata-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.065306 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/66c0dd63-bd8e-44ce-bd9a-edd421f59682-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"66c0dd63-bd8e-44ce-bd9a-edd421f59682\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.065337 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/42fa6a77-748b-44bb-8a59-c9b083d917df-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"42fa6a77-748b-44bb-8a59-c9b083d917df\") " pod="openstack/nova-metadata-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.065425 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/42fa6a77-748b-44bb-8a59-c9b083d917df-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"42fa6a77-748b-44bb-8a59-c9b083d917df\") " pod="openstack/nova-metadata-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.065453 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xpqg\" (UniqueName: \"kubernetes.io/projected/66c0dd63-bd8e-44ce-bd9a-edd421f59682-kube-api-access-8xpqg\") pod \"nova-cell1-novncproxy-0\" (UID: \"66c0dd63-bd8e-44ce-bd9a-edd421f59682\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.065489 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c0dd63-bd8e-44ce-bd9a-edd421f59682-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"66c0dd63-bd8e-44ce-bd9a-edd421f59682\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.065506 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/66c0dd63-bd8e-44ce-bd9a-edd421f59682-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"66c0dd63-bd8e-44ce-bd9a-edd421f59682\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.065525 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42fa6a77-748b-44bb-8a59-c9b083d917df-config-data\") pod \"nova-metadata-0\" (UID: \"42fa6a77-748b-44bb-8a59-c9b083d917df\") " pod="openstack/nova-metadata-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.065552 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/66c0dd63-bd8e-44ce-bd9a-edd421f59682-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"66c0dd63-bd8e-44ce-bd9a-edd421f59682\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.065640 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42fa6a77-748b-44bb-8a59-c9b083d917df-logs\") pod \"nova-metadata-0\" (UID: \"42fa6a77-748b-44bb-8a59-c9b083d917df\") " pod="openstack/nova-metadata-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.167734 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42fa6a77-748b-44bb-8a59-c9b083d917df-logs\") pod \"nova-metadata-0\" (UID: \"42fa6a77-748b-44bb-8a59-c9b083d917df\") " pod="openstack/nova-metadata-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.167854 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m48qs\" (UniqueName: \"kubernetes.io/projected/42fa6a77-748b-44bb-8a59-c9b083d917df-kube-api-access-m48qs\") pod \"nova-metadata-0\" (UID: \"42fa6a77-748b-44bb-8a59-c9b083d917df\") " pod="openstack/nova-metadata-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.167942 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/66c0dd63-bd8e-44ce-bd9a-edd421f59682-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"66c0dd63-bd8e-44ce-bd9a-edd421f59682\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.167977 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/42fa6a77-748b-44bb-8a59-c9b083d917df-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"42fa6a77-748b-44bb-8a59-c9b083d917df\") " pod="openstack/nova-metadata-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.168027 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42fa6a77-748b-44bb-8a59-c9b083d917df-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"42fa6a77-748b-44bb-8a59-c9b083d917df\") " pod="openstack/nova-metadata-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.168056 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xpqg\" (UniqueName: \"kubernetes.io/projected/66c0dd63-bd8e-44ce-bd9a-edd421f59682-kube-api-access-8xpqg\") pod \"nova-cell1-novncproxy-0\" (UID: \"66c0dd63-bd8e-44ce-bd9a-edd421f59682\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.168095 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c0dd63-bd8e-44ce-bd9a-edd421f59682-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"66c0dd63-bd8e-44ce-bd9a-edd421f59682\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.168118 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/66c0dd63-bd8e-44ce-bd9a-edd421f59682-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"66c0dd63-bd8e-44ce-bd9a-edd421f59682\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.168137 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42fa6a77-748b-44bb-8a59-c9b083d917df-config-data\") pod \"nova-metadata-0\" (UID: \"42fa6a77-748b-44bb-8a59-c9b083d917df\") " pod="openstack/nova-metadata-0" Feb 18 00:50:55 crc kubenswrapper[4847]: 
I0218 00:50:55.168171 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c0dd63-bd8e-44ce-bd9a-edd421f59682-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"66c0dd63-bd8e-44ce-bd9a-edd421f59682\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.168301 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42fa6a77-748b-44bb-8a59-c9b083d917df-logs\") pod \"nova-metadata-0\" (UID: \"42fa6a77-748b-44bb-8a59-c9b083d917df\") " pod="openstack/nova-metadata-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.173905 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/42fa6a77-748b-44bb-8a59-c9b083d917df-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"42fa6a77-748b-44bb-8a59-c9b083d917df\") " pod="openstack/nova-metadata-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.173944 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c0dd63-bd8e-44ce-bd9a-edd421f59682-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"66c0dd63-bd8e-44ce-bd9a-edd421f59682\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.174440 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/66c0dd63-bd8e-44ce-bd9a-edd421f59682-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"66c0dd63-bd8e-44ce-bd9a-edd421f59682\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.174931 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/66c0dd63-bd8e-44ce-bd9a-edd421f59682-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"66c0dd63-bd8e-44ce-bd9a-edd421f59682\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.175591 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42fa6a77-748b-44bb-8a59-c9b083d917df-config-data\") pod \"nova-metadata-0\" (UID: \"42fa6a77-748b-44bb-8a59-c9b083d917df\") " pod="openstack/nova-metadata-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.176772 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/66c0dd63-bd8e-44ce-bd9a-edd421f59682-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"66c0dd63-bd8e-44ce-bd9a-edd421f59682\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.178045 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42fa6a77-748b-44bb-8a59-c9b083d917df-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"42fa6a77-748b-44bb-8a59-c9b083d917df\") " pod="openstack/nova-metadata-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.185820 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m48qs\" (UniqueName: \"kubernetes.io/projected/42fa6a77-748b-44bb-8a59-c9b083d917df-kube-api-access-m48qs\") pod \"nova-metadata-0\" (UID: \"42fa6a77-748b-44bb-8a59-c9b083d917df\") " pod="openstack/nova-metadata-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.189295 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xpqg\" (UniqueName: \"kubernetes.io/projected/66c0dd63-bd8e-44ce-bd9a-edd421f59682-kube-api-access-8xpqg\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"66c0dd63-bd8e-44ce-bd9a-edd421f59682\") " pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.228786 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.259791 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.444679 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d54442ab-bec7-429c-ae47-6c781844eb4b" path="/var/lib/kubelet/pods/d54442ab-bec7-429c-ae47-6c781844eb4b/volumes" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.445256 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f76974c5-d87a-4a52-ac85-364597594818" path="/var/lib/kubelet/pods/f76974c5-d87a-4a52-ac85-364597594818/volumes" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.762586 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.765995 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.831385 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 18 00:50:55 crc kubenswrapper[4847]: I0218 00:50:55.967924 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.012976 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-5tfxd"] Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.032047 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-5tfxd"] Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.033070 4847 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.104089 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-5tfxd\" (UID: \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.104159 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-5tfxd\" (UID: \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.104183 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-5tfxd\" (UID: \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.104237 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk9nz\" (UniqueName: \"kubernetes.io/projected/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-kube-api-access-mk9nz\") pod \"dnsmasq-dns-f84f9ccf-5tfxd\" (UID: \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.104290 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-config\") pod \"dnsmasq-dns-f84f9ccf-5tfxd\" (UID: 
\"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.104314 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-5tfxd\" (UID: \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.205756 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-5tfxd\" (UID: \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.205800 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-5tfxd\" (UID: \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.205858 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk9nz\" (UniqueName: \"kubernetes.io/projected/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-kube-api-access-mk9nz\") pod \"dnsmasq-dns-f84f9ccf-5tfxd\" (UID: \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.205913 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-config\") pod \"dnsmasq-dns-f84f9ccf-5tfxd\" (UID: \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " 
pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.205939 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-5tfxd\" (UID: \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.206001 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-5tfxd\" (UID: \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.206862 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-5tfxd\" (UID: \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.207354 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-5tfxd\" (UID: \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.207869 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-5tfxd\" (UID: \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 
00:50:56.208569 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-config\") pod \"dnsmasq-dns-f84f9ccf-5tfxd\" (UID: \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.209107 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-5tfxd\" (UID: \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.227794 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk9nz\" (UniqueName: \"kubernetes.io/projected/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-kube-api-access-mk9nz\") pod \"dnsmasq-dns-f84f9ccf-5tfxd\" (UID: \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.481823 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.779358 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"66c0dd63-bd8e-44ce-bd9a-edd421f59682","Type":"ContainerStarted","Data":"22b584c05506f4ddc3539ddac976805539e3a9650065779dd84ae6ad40d1cff5"} Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.779817 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"66c0dd63-bd8e-44ce-bd9a-edd421f59682","Type":"ContainerStarted","Data":"994e4fae04818fa43d54dbe079157e46e931e7c0bf3e086df024de103def14e0"} Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.784491 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"42fa6a77-748b-44bb-8a59-c9b083d917df","Type":"ContainerStarted","Data":"be301d0023c1279e1cd5d059aaafd41c73d963d7b40835cff350ec2d3d8728bb"} Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.784541 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"42fa6a77-748b-44bb-8a59-c9b083d917df","Type":"ContainerStarted","Data":"09669aaae55f26cd367547f17728cfae7cbe7eb6902b11e1fc41fb5e9d600dd5"} Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.784556 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"42fa6a77-748b-44bb-8a59-c9b083d917df","Type":"ContainerStarted","Data":"7d98dd09e3c9e49e29bd7b2980cc766d89bc1ef2ace3896024d6e4da2f0cf3f7"} Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.801819 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.801798688 podStartE2EDuration="2.801798688s" podCreationTimestamp="2026-02-18 00:50:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-18 00:50:56.798912858 +0000 UTC m=+1530.176263800" watchObservedRunningTime="2026-02-18 00:50:56.801798688 +0000 UTC m=+1530.179149630" Feb 18 00:50:56 crc kubenswrapper[4847]: I0218 00:50:56.838626 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.838589925 podStartE2EDuration="2.838589925s" podCreationTimestamp="2026-02-18 00:50:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:50:56.82510746 +0000 UTC m=+1530.202458402" watchObservedRunningTime="2026-02-18 00:50:56.838589925 +0000 UTC m=+1530.215940867" Feb 18 00:50:57 crc kubenswrapper[4847]: I0218 00:50:57.039977 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-5tfxd"] Feb 18 00:50:57 crc kubenswrapper[4847]: W0218 00:50:57.042083 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c8be14a_5fbf_40ea_aa45_ea6b6474f281.slice/crio-3739451864dc258f92db7d4729cc68b6c96f0842770265f0da8ac7a030c30311 WatchSource:0}: Error finding container 3739451864dc258f92db7d4729cc68b6c96f0842770265f0da8ac7a030c30311: Status 404 returned error can't find the container with id 3739451864dc258f92db7d4729cc68b6c96f0842770265f0da8ac7a030c30311 Feb 18 00:50:57 crc kubenswrapper[4847]: I0218 00:50:57.795843 4847 generic.go:334] "Generic (PLEG): container finished" podID="9c8be14a-5fbf-40ea-aa45-ea6b6474f281" containerID="10e3f7e4198522ee34cd7815d728d63ff8bc5a2c434c6680e89639e6b181c343" exitCode=0 Feb 18 00:50:57 crc kubenswrapper[4847]: I0218 00:50:57.795923 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" event={"ID":"9c8be14a-5fbf-40ea-aa45-ea6b6474f281","Type":"ContainerDied","Data":"10e3f7e4198522ee34cd7815d728d63ff8bc5a2c434c6680e89639e6b181c343"} Feb 18 
00:50:57 crc kubenswrapper[4847]: I0218 00:50:57.796373 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" event={"ID":"9c8be14a-5fbf-40ea-aa45-ea6b6474f281","Type":"ContainerStarted","Data":"3739451864dc258f92db7d4729cc68b6c96f0842770265f0da8ac7a030c30311"} Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.416433 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.774935 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.814775 4847 generic.go:334] "Generic (PLEG): container finished" podID="a0577c3f-a57d-4691-861b-3107614b86bc" containerID="d777596b7a54d0cda166aa5780ac0febf8394107020b950d8e10e7e588201977" exitCode=137 Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.814834 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0577c3f-a57d-4691-861b-3107614b86bc","Type":"ContainerDied","Data":"d777596b7a54d0cda166aa5780ac0febf8394107020b950d8e10e7e588201977"} Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.814862 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0577c3f-a57d-4691-861b-3107614b86bc","Type":"ContainerDied","Data":"af83b5ae588b4c5af0e030244419851562664a562f959ff087c1d1e578f7363d"} Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.814880 4847 scope.go:117] "RemoveContainer" containerID="d777596b7a54d0cda166aa5780ac0febf8394107020b950d8e10e7e588201977" Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.815005 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.818765 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" event={"ID":"9c8be14a-5fbf-40ea-aa45-ea6b6474f281","Type":"ContainerStarted","Data":"8687f26c929ad42a1aeb726b2eb5122494297f92187014c7c153522bb3feaeef"} Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.819141 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.818929 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ced9424c-96b3-40f4-801b-4f817c0845a7" containerName="nova-api-api" containerID="cri-o://599d8e16a42121604947d8d8e89eedda877fd44208cb7ed93f3b2abc6e9956d7" gracePeriod=30 Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.818833 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ced9424c-96b3-40f4-801b-4f817c0845a7" containerName="nova-api-log" containerID="cri-o://e316e0190c1c06168503ad9ab13cf990bd7380b0a56fa49f192bc5c4dc9608ec" gracePeriod=30 Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.852017 4847 scope.go:117] "RemoveContainer" containerID="ca4657282a725e909bccf87b02f2181447171a04bfa88a2f454f91a55571611b" Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.852054 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" podStartSLOduration=3.852010184 podStartE2EDuration="3.852010184s" podCreationTimestamp="2026-02-18 00:50:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:50:58.846142302 +0000 UTC m=+1532.223493244" watchObservedRunningTime="2026-02-18 00:50:58.852010184 +0000 UTC m=+1532.229361126" Feb 18 00:50:58 crc 
kubenswrapper[4847]: I0218 00:50:58.873537 4847 scope.go:117] "RemoveContainer" containerID="8b1bf6e75c0e371d61fa751126ed3a189072641bfb79c0aca80fc0080f9e65dc" Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.880219 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-ceilometer-tls-certs\") pod \"a0577c3f-a57d-4691-861b-3107614b86bc\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.880467 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-combined-ca-bundle\") pod \"a0577c3f-a57d-4691-861b-3107614b86bc\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.880697 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7hcs\" (UniqueName: \"kubernetes.io/projected/a0577c3f-a57d-4691-861b-3107614b86bc-kube-api-access-s7hcs\") pod \"a0577c3f-a57d-4691-861b-3107614b86bc\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.880850 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-sg-core-conf-yaml\") pod \"a0577c3f-a57d-4691-861b-3107614b86bc\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.881099 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-config-data\") pod \"a0577c3f-a57d-4691-861b-3107614b86bc\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 
00:50:58.881433 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-scripts\") pod \"a0577c3f-a57d-4691-861b-3107614b86bc\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.881566 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0577c3f-a57d-4691-861b-3107614b86bc-log-httpd\") pod \"a0577c3f-a57d-4691-861b-3107614b86bc\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.881659 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0577c3f-a57d-4691-861b-3107614b86bc-run-httpd\") pod \"a0577c3f-a57d-4691-861b-3107614b86bc\" (UID: \"a0577c3f-a57d-4691-861b-3107614b86bc\") " Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.883629 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0577c3f-a57d-4691-861b-3107614b86bc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a0577c3f-a57d-4691-861b-3107614b86bc" (UID: "a0577c3f-a57d-4691-861b-3107614b86bc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.884148 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0577c3f-a57d-4691-861b-3107614b86bc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a0577c3f-a57d-4691-861b-3107614b86bc" (UID: "a0577c3f-a57d-4691-861b-3107614b86bc"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.903449 4847 scope.go:117] "RemoveContainer" containerID="1745b060abde282a4918561646f01fba2bd009c32ab6d0e5503f2e0d1997780a" Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.909417 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0577c3f-a57d-4691-861b-3107614b86bc-kube-api-access-s7hcs" (OuterVolumeSpecName: "kube-api-access-s7hcs") pod "a0577c3f-a57d-4691-861b-3107614b86bc" (UID: "a0577c3f-a57d-4691-861b-3107614b86bc"). InnerVolumeSpecName "kube-api-access-s7hcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.909423 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-scripts" (OuterVolumeSpecName: "scripts") pod "a0577c3f-a57d-4691-861b-3107614b86bc" (UID: "a0577c3f-a57d-4691-861b-3107614b86bc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.942196 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a0577c3f-a57d-4691-861b-3107614b86bc" (UID: "a0577c3f-a57d-4691-861b-3107614b86bc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.950439 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a0577c3f-a57d-4691-861b-3107614b86bc" (UID: "a0577c3f-a57d-4691-861b-3107614b86bc"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.967718 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0577c3f-a57d-4691-861b-3107614b86bc" (UID: "a0577c3f-a57d-4691-861b-3107614b86bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.984581 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.984626 4847 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0577c3f-a57d-4691-861b-3107614b86bc-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.984636 4847 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0577c3f-a57d-4691-861b-3107614b86bc-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.984646 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.984655 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.984665 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7hcs\" (UniqueName: 
\"kubernetes.io/projected/a0577c3f-a57d-4691-861b-3107614b86bc-kube-api-access-s7hcs\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:58 crc kubenswrapper[4847]: I0218 00:50:58.984674 4847 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.035955 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-config-data" (OuterVolumeSpecName: "config-data") pod "a0577c3f-a57d-4691-861b-3107614b86bc" (UID: "a0577c3f-a57d-4691-861b-3107614b86bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.086916 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0577c3f-a57d-4691-861b-3107614b86bc-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.133222 4847 scope.go:117] "RemoveContainer" containerID="d777596b7a54d0cda166aa5780ac0febf8394107020b950d8e10e7e588201977" Feb 18 00:50:59 crc kubenswrapper[4847]: E0218 00:50:59.134251 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d777596b7a54d0cda166aa5780ac0febf8394107020b950d8e10e7e588201977\": container with ID starting with d777596b7a54d0cda166aa5780ac0febf8394107020b950d8e10e7e588201977 not found: ID does not exist" containerID="d777596b7a54d0cda166aa5780ac0febf8394107020b950d8e10e7e588201977" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.134291 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d777596b7a54d0cda166aa5780ac0febf8394107020b950d8e10e7e588201977"} err="failed to get container status 
\"d777596b7a54d0cda166aa5780ac0febf8394107020b950d8e10e7e588201977\": rpc error: code = NotFound desc = could not find container \"d777596b7a54d0cda166aa5780ac0febf8394107020b950d8e10e7e588201977\": container with ID starting with d777596b7a54d0cda166aa5780ac0febf8394107020b950d8e10e7e588201977 not found: ID does not exist" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.134317 4847 scope.go:117] "RemoveContainer" containerID="ca4657282a725e909bccf87b02f2181447171a04bfa88a2f454f91a55571611b" Feb 18 00:50:59 crc kubenswrapper[4847]: E0218 00:50:59.134800 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca4657282a725e909bccf87b02f2181447171a04bfa88a2f454f91a55571611b\": container with ID starting with ca4657282a725e909bccf87b02f2181447171a04bfa88a2f454f91a55571611b not found: ID does not exist" containerID="ca4657282a725e909bccf87b02f2181447171a04bfa88a2f454f91a55571611b" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.134845 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca4657282a725e909bccf87b02f2181447171a04bfa88a2f454f91a55571611b"} err="failed to get container status \"ca4657282a725e909bccf87b02f2181447171a04bfa88a2f454f91a55571611b\": rpc error: code = NotFound desc = could not find container \"ca4657282a725e909bccf87b02f2181447171a04bfa88a2f454f91a55571611b\": container with ID starting with ca4657282a725e909bccf87b02f2181447171a04bfa88a2f454f91a55571611b not found: ID does not exist" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.134867 4847 scope.go:117] "RemoveContainer" containerID="8b1bf6e75c0e371d61fa751126ed3a189072641bfb79c0aca80fc0080f9e65dc" Feb 18 00:50:59 crc kubenswrapper[4847]: E0218 00:50:59.135180 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8b1bf6e75c0e371d61fa751126ed3a189072641bfb79c0aca80fc0080f9e65dc\": container with ID starting with 8b1bf6e75c0e371d61fa751126ed3a189072641bfb79c0aca80fc0080f9e65dc not found: ID does not exist" containerID="8b1bf6e75c0e371d61fa751126ed3a189072641bfb79c0aca80fc0080f9e65dc" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.135223 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b1bf6e75c0e371d61fa751126ed3a189072641bfb79c0aca80fc0080f9e65dc"} err="failed to get container status \"8b1bf6e75c0e371d61fa751126ed3a189072641bfb79c0aca80fc0080f9e65dc\": rpc error: code = NotFound desc = could not find container \"8b1bf6e75c0e371d61fa751126ed3a189072641bfb79c0aca80fc0080f9e65dc\": container with ID starting with 8b1bf6e75c0e371d61fa751126ed3a189072641bfb79c0aca80fc0080f9e65dc not found: ID does not exist" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.135254 4847 scope.go:117] "RemoveContainer" containerID="1745b060abde282a4918561646f01fba2bd009c32ab6d0e5503f2e0d1997780a" Feb 18 00:50:59 crc kubenswrapper[4847]: E0218 00:50:59.135764 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1745b060abde282a4918561646f01fba2bd009c32ab6d0e5503f2e0d1997780a\": container with ID starting with 1745b060abde282a4918561646f01fba2bd009c32ab6d0e5503f2e0d1997780a not found: ID does not exist" containerID="1745b060abde282a4918561646f01fba2bd009c32ab6d0e5503f2e0d1997780a" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.135789 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1745b060abde282a4918561646f01fba2bd009c32ab6d0e5503f2e0d1997780a"} err="failed to get container status \"1745b060abde282a4918561646f01fba2bd009c32ab6d0e5503f2e0d1997780a\": rpc error: code = NotFound desc = could not find container \"1745b060abde282a4918561646f01fba2bd009c32ab6d0e5503f2e0d1997780a\": container with ID 
starting with 1745b060abde282a4918561646f01fba2bd009c32ab6d0e5503f2e0d1997780a not found: ID does not exist" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.156456 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.169115 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.184005 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:50:59 crc kubenswrapper[4847]: E0218 00:50:59.184647 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0577c3f-a57d-4691-861b-3107614b86bc" containerName="ceilometer-central-agent" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.184671 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0577c3f-a57d-4691-861b-3107614b86bc" containerName="ceilometer-central-agent" Feb 18 00:50:59 crc kubenswrapper[4847]: E0218 00:50:59.184685 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0577c3f-a57d-4691-861b-3107614b86bc" containerName="ceilometer-notification-agent" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.184693 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0577c3f-a57d-4691-861b-3107614b86bc" containerName="ceilometer-notification-agent" Feb 18 00:50:59 crc kubenswrapper[4847]: E0218 00:50:59.184704 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0577c3f-a57d-4691-861b-3107614b86bc" containerName="sg-core" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.184717 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0577c3f-a57d-4691-861b-3107614b86bc" containerName="sg-core" Feb 18 00:50:59 crc kubenswrapper[4847]: E0218 00:50:59.184739 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0577c3f-a57d-4691-861b-3107614b86bc" containerName="proxy-httpd" Feb 18 00:50:59 crc 
kubenswrapper[4847]: I0218 00:50:59.184745 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0577c3f-a57d-4691-861b-3107614b86bc" containerName="proxy-httpd" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.184955 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0577c3f-a57d-4691-861b-3107614b86bc" containerName="proxy-httpd" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.184976 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0577c3f-a57d-4691-861b-3107614b86bc" containerName="ceilometer-central-agent" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.184985 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0577c3f-a57d-4691-861b-3107614b86bc" containerName="sg-core" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.184996 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0577c3f-a57d-4691-861b-3107614b86bc" containerName="ceilometer-notification-agent" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.187368 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.189920 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.190538 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.194235 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.200261 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.290556 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.290961 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq2h7\" (UniqueName: \"kubernetes.io/projected/6fd48768-852e-4360-919b-24b8f38bd2b2-kube-api-access-sq2h7\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.291124 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.291194 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fd48768-852e-4360-919b-24b8f38bd2b2-log-httpd\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.291297 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-scripts\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.291417 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-config-data\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.291521 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fd48768-852e-4360-919b-24b8f38bd2b2-run-httpd\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.291791 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.304729 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:50:59 crc kubenswrapper[4847]: E0218 00:50:59.305518 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted 
volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-sq2h7 log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="6fd48768-852e-4360-919b-24b8f38bd2b2" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.393688 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.393773 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fd48768-852e-4360-919b-24b8f38bd2b2-log-httpd\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.393828 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-scripts\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.393867 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-config-data\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.393905 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fd48768-852e-4360-919b-24b8f38bd2b2-run-httpd\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " 
pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.393986 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.394100 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.394208 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq2h7\" (UniqueName: \"kubernetes.io/projected/6fd48768-852e-4360-919b-24b8f38bd2b2-kube-api-access-sq2h7\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.394713 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fd48768-852e-4360-919b-24b8f38bd2b2-run-httpd\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.394949 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fd48768-852e-4360-919b-24b8f38bd2b2-log-httpd\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.400705 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-config-data\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.401707 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.402742 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.403374 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.410133 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq2h7\" (UniqueName: \"kubernetes.io/projected/6fd48768-852e-4360-919b-24b8f38bd2b2-kube-api-access-sq2h7\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.410782 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-scripts\") pod \"ceilometer-0\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.424366 
4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0577c3f-a57d-4691-861b-3107614b86bc" path="/var/lib/kubelet/pods/a0577c3f-a57d-4691-861b-3107614b86bc/volumes" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.831059 4847 generic.go:334] "Generic (PLEG): container finished" podID="ced9424c-96b3-40f4-801b-4f817c0845a7" containerID="e316e0190c1c06168503ad9ab13cf990bd7380b0a56fa49f192bc5c4dc9608ec" exitCode=143 Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.831111 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ced9424c-96b3-40f4-801b-4f817c0845a7","Type":"ContainerDied","Data":"e316e0190c1c06168503ad9ab13cf990bd7380b0a56fa49f192bc5c4dc9608ec"} Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.832792 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.841807 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.905954 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-combined-ca-bundle\") pod \"6fd48768-852e-4360-919b-24b8f38bd2b2\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.906024 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-ceilometer-tls-certs\") pod \"6fd48768-852e-4360-919b-24b8f38bd2b2\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.906058 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-scripts\") pod \"6fd48768-852e-4360-919b-24b8f38bd2b2\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.906107 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fd48768-852e-4360-919b-24b8f38bd2b2-log-httpd\") pod \"6fd48768-852e-4360-919b-24b8f38bd2b2\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.906189 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fd48768-852e-4360-919b-24b8f38bd2b2-run-httpd\") pod \"6fd48768-852e-4360-919b-24b8f38bd2b2\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.906281 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-sg-core-conf-yaml\") pod \"6fd48768-852e-4360-919b-24b8f38bd2b2\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.906364 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-config-data\") pod \"6fd48768-852e-4360-919b-24b8f38bd2b2\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.906396 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq2h7\" (UniqueName: \"kubernetes.io/projected/6fd48768-852e-4360-919b-24b8f38bd2b2-kube-api-access-sq2h7\") pod \"6fd48768-852e-4360-919b-24b8f38bd2b2\" (UID: \"6fd48768-852e-4360-919b-24b8f38bd2b2\") " Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.906751 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fd48768-852e-4360-919b-24b8f38bd2b2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6fd48768-852e-4360-919b-24b8f38bd2b2" (UID: "6fd48768-852e-4360-919b-24b8f38bd2b2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.907001 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fd48768-852e-4360-919b-24b8f38bd2b2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6fd48768-852e-4360-919b-24b8f38bd2b2" (UID: "6fd48768-852e-4360-919b-24b8f38bd2b2"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.907514 4847 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fd48768-852e-4360-919b-24b8f38bd2b2-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.907628 4847 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6fd48768-852e-4360-919b-24b8f38bd2b2-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.915954 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6fd48768-852e-4360-919b-24b8f38bd2b2" (UID: "6fd48768-852e-4360-919b-24b8f38bd2b2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.916104 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd48768-852e-4360-919b-24b8f38bd2b2-kube-api-access-sq2h7" (OuterVolumeSpecName: "kube-api-access-sq2h7") pod "6fd48768-852e-4360-919b-24b8f38bd2b2" (UID: "6fd48768-852e-4360-919b-24b8f38bd2b2"). InnerVolumeSpecName "kube-api-access-sq2h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.916297 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-config-data" (OuterVolumeSpecName: "config-data") pod "6fd48768-852e-4360-919b-24b8f38bd2b2" (UID: "6fd48768-852e-4360-919b-24b8f38bd2b2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.916892 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fd48768-852e-4360-919b-24b8f38bd2b2" (UID: "6fd48768-852e-4360-919b-24b8f38bd2b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.916990 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-scripts" (OuterVolumeSpecName: "scripts") pod "6fd48768-852e-4360-919b-24b8f38bd2b2" (UID: "6fd48768-852e-4360-919b-24b8f38bd2b2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:50:59 crc kubenswrapper[4847]: I0218 00:50:59.917343 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "6fd48768-852e-4360-919b-24b8f38bd2b2" (UID: "6fd48768-852e-4360-919b-24b8f38bd2b2"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:51:00 crc kubenswrapper[4847]: I0218 00:51:00.009773 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:00 crc kubenswrapper[4847]: I0218 00:51:00.009807 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq2h7\" (UniqueName: \"kubernetes.io/projected/6fd48768-852e-4360-919b-24b8f38bd2b2-kube-api-access-sq2h7\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:00 crc kubenswrapper[4847]: I0218 00:51:00.009819 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:00 crc kubenswrapper[4847]: I0218 00:51:00.009828 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:00 crc kubenswrapper[4847]: I0218 00:51:00.009837 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:00 crc kubenswrapper[4847]: I0218 00:51:00.009845 4847 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6fd48768-852e-4360-919b-24b8f38bd2b2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:00 crc kubenswrapper[4847]: I0218 00:51:00.230580 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:51:00 crc kubenswrapper[4847]: I0218 00:51:00.261715 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Feb 18 00:51:00 crc kubenswrapper[4847]: I0218 00:51:00.261774 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 00:51:00 crc kubenswrapper[4847]: I0218 00:51:00.844251 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:51:00 crc kubenswrapper[4847]: I0218 00:51:00.942210 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:51:00 crc kubenswrapper[4847]: I0218 00:51:00.960045 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:51:00 crc kubenswrapper[4847]: I0218 00:51:00.979166 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:51:00 crc kubenswrapper[4847]: I0218 00:51:00.981868 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:51:00 crc kubenswrapper[4847]: I0218 00:51:00.984413 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 00:51:00 crc kubenswrapper[4847]: I0218 00:51:00.984644 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 00:51:00 crc kubenswrapper[4847]: I0218 00:51:00.984659 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 00:51:00 crc kubenswrapper[4847]: I0218 00:51:00.990814 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.133277 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pwh8\" (UniqueName: \"kubernetes.io/projected/1949379c-49e4-405f-8716-254129699489-kube-api-access-9pwh8\") pod \"ceilometer-0\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " 
pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.133345 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.133918 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-config-data\") pod \"ceilometer-0\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.134011 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-scripts\") pod \"ceilometer-0\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.134080 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1949379c-49e4-405f-8716-254129699489-log-httpd\") pod \"ceilometer-0\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.134122 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.134231 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.134284 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1949379c-49e4-405f-8716-254129699489-run-httpd\") pod \"ceilometer-0\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.236670 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-config-data\") pod \"ceilometer-0\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.236737 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-scripts\") pod \"ceilometer-0\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.236771 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1949379c-49e4-405f-8716-254129699489-log-httpd\") pod \"ceilometer-0\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.236796 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"1949379c-49e4-405f-8716-254129699489\") " pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.236860 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.236902 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1949379c-49e4-405f-8716-254129699489-run-httpd\") pod \"ceilometer-0\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.237010 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pwh8\" (UniqueName: \"kubernetes.io/projected/1949379c-49e4-405f-8716-254129699489-kube-api-access-9pwh8\") pod \"ceilometer-0\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.237037 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.238110 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1949379c-49e4-405f-8716-254129699489-run-httpd\") pod \"ceilometer-0\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.238124 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1949379c-49e4-405f-8716-254129699489-log-httpd\") pod \"ceilometer-0\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.243647 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.243725 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-scripts\") pod \"ceilometer-0\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.244010 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.245728 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-config-data\") pod \"ceilometer-0\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.250318 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 
00:51:01.257423 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pwh8\" (UniqueName: \"kubernetes.io/projected/1949379c-49e4-405f-8716-254129699489-kube-api-access-9pwh8\") pod \"ceilometer-0\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.306695 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.333811 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.408295 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:51:01 crc kubenswrapper[4847]: E0218 00:51:01.408808 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.417428 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fd48768-852e-4360-919b-24b8f38bd2b2" path="/var/lib/kubelet/pods/6fd48768-852e-4360-919b-24b8f38bd2b2/volumes" Feb 18 00:51:01 crc kubenswrapper[4847]: I0218 00:51:01.871548 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.579028 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.722384 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced9424c-96b3-40f4-801b-4f817c0845a7-combined-ca-bundle\") pod \"ced9424c-96b3-40f4-801b-4f817c0845a7\" (UID: \"ced9424c-96b3-40f4-801b-4f817c0845a7\") " Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.722474 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ced9424c-96b3-40f4-801b-4f817c0845a7-logs\") pod \"ced9424c-96b3-40f4-801b-4f817c0845a7\" (UID: \"ced9424c-96b3-40f4-801b-4f817c0845a7\") " Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.722538 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ced9424c-96b3-40f4-801b-4f817c0845a7-config-data\") pod \"ced9424c-96b3-40f4-801b-4f817c0845a7\" (UID: \"ced9424c-96b3-40f4-801b-4f817c0845a7\") " Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.722652 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4z9m\" (UniqueName: \"kubernetes.io/projected/ced9424c-96b3-40f4-801b-4f817c0845a7-kube-api-access-l4z9m\") pod \"ced9424c-96b3-40f4-801b-4f817c0845a7\" (UID: \"ced9424c-96b3-40f4-801b-4f817c0845a7\") " Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.722896 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ced9424c-96b3-40f4-801b-4f817c0845a7-logs" (OuterVolumeSpecName: "logs") pod "ced9424c-96b3-40f4-801b-4f817c0845a7" (UID: "ced9424c-96b3-40f4-801b-4f817c0845a7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.723550 4847 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ced9424c-96b3-40f4-801b-4f817c0845a7-logs\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.727950 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ced9424c-96b3-40f4-801b-4f817c0845a7-kube-api-access-l4z9m" (OuterVolumeSpecName: "kube-api-access-l4z9m") pod "ced9424c-96b3-40f4-801b-4f817c0845a7" (UID: "ced9424c-96b3-40f4-801b-4f817c0845a7"). InnerVolumeSpecName "kube-api-access-l4z9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.757763 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ced9424c-96b3-40f4-801b-4f817c0845a7-config-data" (OuterVolumeSpecName: "config-data") pod "ced9424c-96b3-40f4-801b-4f817c0845a7" (UID: "ced9424c-96b3-40f4-801b-4f817c0845a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.778568 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ced9424c-96b3-40f4-801b-4f817c0845a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ced9424c-96b3-40f4-801b-4f817c0845a7" (UID: "ced9424c-96b3-40f4-801b-4f817c0845a7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.825040 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ced9424c-96b3-40f4-801b-4f817c0845a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.825073 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ced9424c-96b3-40f4-801b-4f817c0845a7-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.825083 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4z9m\" (UniqueName: \"kubernetes.io/projected/ced9424c-96b3-40f4-801b-4f817c0845a7-kube-api-access-l4z9m\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.867876 4847 generic.go:334] "Generic (PLEG): container finished" podID="ced9424c-96b3-40f4-801b-4f817c0845a7" containerID="599d8e16a42121604947d8d8e89eedda877fd44208cb7ed93f3b2abc6e9956d7" exitCode=0 Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.867931 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.867960 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ced9424c-96b3-40f4-801b-4f817c0845a7","Type":"ContainerDied","Data":"599d8e16a42121604947d8d8e89eedda877fd44208cb7ed93f3b2abc6e9956d7"} Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.867997 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ced9424c-96b3-40f4-801b-4f817c0845a7","Type":"ContainerDied","Data":"a0daadb055480fddbfe0b05962be57bb076eac0a7fdd899040e06abe7cc41746"} Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.868018 4847 scope.go:117] "RemoveContainer" containerID="599d8e16a42121604947d8d8e89eedda877fd44208cb7ed93f3b2abc6e9956d7" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.870117 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1949379c-49e4-405f-8716-254129699489","Type":"ContainerStarted","Data":"117299d45a910586ea273e72c6ca0bf14bec6a6bb258d63ed2fd5d97eee2cba3"} Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.870163 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1949379c-49e4-405f-8716-254129699489","Type":"ContainerStarted","Data":"6c6dc944078bc9c0412ca7255de3e69d5176e177e89dac55eb3bb8925e85c4f9"} Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.887830 4847 scope.go:117] "RemoveContainer" containerID="e316e0190c1c06168503ad9ab13cf990bd7380b0a56fa49f192bc5c4dc9608ec" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.906634 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.907287 4847 scope.go:117] "RemoveContainer" containerID="599d8e16a42121604947d8d8e89eedda877fd44208cb7ed93f3b2abc6e9956d7" Feb 18 00:51:02 crc kubenswrapper[4847]: E0218 00:51:02.907789 4847 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"599d8e16a42121604947d8d8e89eedda877fd44208cb7ed93f3b2abc6e9956d7\": container with ID starting with 599d8e16a42121604947d8d8e89eedda877fd44208cb7ed93f3b2abc6e9956d7 not found: ID does not exist" containerID="599d8e16a42121604947d8d8e89eedda877fd44208cb7ed93f3b2abc6e9956d7" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.907835 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"599d8e16a42121604947d8d8e89eedda877fd44208cb7ed93f3b2abc6e9956d7"} err="failed to get container status \"599d8e16a42121604947d8d8e89eedda877fd44208cb7ed93f3b2abc6e9956d7\": rpc error: code = NotFound desc = could not find container \"599d8e16a42121604947d8d8e89eedda877fd44208cb7ed93f3b2abc6e9956d7\": container with ID starting with 599d8e16a42121604947d8d8e89eedda877fd44208cb7ed93f3b2abc6e9956d7 not found: ID does not exist" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.907863 4847 scope.go:117] "RemoveContainer" containerID="e316e0190c1c06168503ad9ab13cf990bd7380b0a56fa49f192bc5c4dc9608ec" Feb 18 00:51:02 crc kubenswrapper[4847]: E0218 00:51:02.908146 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e316e0190c1c06168503ad9ab13cf990bd7380b0a56fa49f192bc5c4dc9608ec\": container with ID starting with e316e0190c1c06168503ad9ab13cf990bd7380b0a56fa49f192bc5c4dc9608ec not found: ID does not exist" containerID="e316e0190c1c06168503ad9ab13cf990bd7380b0a56fa49f192bc5c4dc9608ec" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.908196 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e316e0190c1c06168503ad9ab13cf990bd7380b0a56fa49f192bc5c4dc9608ec"} err="failed to get container status \"e316e0190c1c06168503ad9ab13cf990bd7380b0a56fa49f192bc5c4dc9608ec\": rpc error: code = NotFound desc = could 
not find container \"e316e0190c1c06168503ad9ab13cf990bd7380b0a56fa49f192bc5c4dc9608ec\": container with ID starting with e316e0190c1c06168503ad9ab13cf990bd7380b0a56fa49f192bc5c4dc9608ec not found: ID does not exist" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.915998 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.939486 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 00:51:02 crc kubenswrapper[4847]: E0218 00:51:02.939977 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ced9424c-96b3-40f4-801b-4f817c0845a7" containerName="nova-api-log" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.939993 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="ced9424c-96b3-40f4-801b-4f817c0845a7" containerName="nova-api-log" Feb 18 00:51:02 crc kubenswrapper[4847]: E0218 00:51:02.940009 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ced9424c-96b3-40f4-801b-4f817c0845a7" containerName="nova-api-api" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.940016 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="ced9424c-96b3-40f4-801b-4f817c0845a7" containerName="nova-api-api" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.940237 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="ced9424c-96b3-40f4-801b-4f817c0845a7" containerName="nova-api-log" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.940260 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="ced9424c-96b3-40f4-801b-4f817c0845a7" containerName="nova-api-api" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.941438 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.945361 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.945686 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.945800 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 18 00:51:02 crc kubenswrapper[4847]: I0218 00:51:02.951725 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:51:03 crc kubenswrapper[4847]: I0218 00:51:03.131248 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxcp6\" (UniqueName: \"kubernetes.io/projected/9efc747e-2d6f-4489-a0ed-aca538e54574-kube-api-access-hxcp6\") pod \"nova-api-0\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") " pod="openstack/nova-api-0" Feb 18 00:51:03 crc kubenswrapper[4847]: I0218 00:51:03.131537 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9efc747e-2d6f-4489-a0ed-aca538e54574-logs\") pod \"nova-api-0\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") " pod="openstack/nova-api-0" Feb 18 00:51:03 crc kubenswrapper[4847]: I0218 00:51:03.131560 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") " pod="openstack/nova-api-0" Feb 18 00:51:03 crc kubenswrapper[4847]: I0218 00:51:03.131593 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") " pod="openstack/nova-api-0" Feb 18 00:51:03 crc kubenswrapper[4847]: I0218 00:51:03.131932 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-config-data\") pod \"nova-api-0\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") " pod="openstack/nova-api-0" Feb 18 00:51:03 crc kubenswrapper[4847]: I0218 00:51:03.131980 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-public-tls-certs\") pod \"nova-api-0\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") " pod="openstack/nova-api-0" Feb 18 00:51:03 crc kubenswrapper[4847]: I0218 00:51:03.234376 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxcp6\" (UniqueName: \"kubernetes.io/projected/9efc747e-2d6f-4489-a0ed-aca538e54574-kube-api-access-hxcp6\") pod \"nova-api-0\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") " pod="openstack/nova-api-0" Feb 18 00:51:03 crc kubenswrapper[4847]: I0218 00:51:03.234461 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9efc747e-2d6f-4489-a0ed-aca538e54574-logs\") pod \"nova-api-0\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") " pod="openstack/nova-api-0" Feb 18 00:51:03 crc kubenswrapper[4847]: I0218 00:51:03.234486 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") " pod="openstack/nova-api-0" Feb 18 
00:51:03 crc kubenswrapper[4847]: I0218 00:51:03.234531 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") " pod="openstack/nova-api-0" Feb 18 00:51:03 crc kubenswrapper[4847]: I0218 00:51:03.234647 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-config-data\") pod \"nova-api-0\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") " pod="openstack/nova-api-0" Feb 18 00:51:03 crc kubenswrapper[4847]: I0218 00:51:03.234666 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-public-tls-certs\") pod \"nova-api-0\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") " pod="openstack/nova-api-0" Feb 18 00:51:03 crc kubenswrapper[4847]: I0218 00:51:03.235523 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9efc747e-2d6f-4489-a0ed-aca538e54574-logs\") pod \"nova-api-0\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") " pod="openstack/nova-api-0" Feb 18 00:51:03 crc kubenswrapper[4847]: I0218 00:51:03.238750 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") " pod="openstack/nova-api-0" Feb 18 00:51:03 crc kubenswrapper[4847]: I0218 00:51:03.238894 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"9efc747e-2d6f-4489-a0ed-aca538e54574\") " pod="openstack/nova-api-0" Feb 18 00:51:03 crc kubenswrapper[4847]: I0218 00:51:03.239332 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") " pod="openstack/nova-api-0" Feb 18 00:51:03 crc kubenswrapper[4847]: I0218 00:51:03.239811 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-config-data\") pod \"nova-api-0\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") " pod="openstack/nova-api-0" Feb 18 00:51:03 crc kubenswrapper[4847]: I0218 00:51:03.270822 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxcp6\" (UniqueName: \"kubernetes.io/projected/9efc747e-2d6f-4489-a0ed-aca538e54574-kube-api-access-hxcp6\") pod \"nova-api-0\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") " pod="openstack/nova-api-0" Feb 18 00:51:03 crc kubenswrapper[4847]: I0218 00:51:03.418705 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ced9424c-96b3-40f4-801b-4f817c0845a7" path="/var/lib/kubelet/pods/ced9424c-96b3-40f4-801b-4f817c0845a7/volumes" Feb 18 00:51:03 crc kubenswrapper[4847]: I0218 00:51:03.563562 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:51:03 crc kubenswrapper[4847]: I0218 00:51:03.889927 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1949379c-49e4-405f-8716-254129699489","Type":"ContainerStarted","Data":"894b3c076fbf9485d22494889a53c3916264c4b03f348b1db79bc06735194bd9"} Feb 18 00:51:04 crc kubenswrapper[4847]: I0218 00:51:04.097125 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:51:04 crc kubenswrapper[4847]: I0218 00:51:04.903123 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1949379c-49e4-405f-8716-254129699489","Type":"ContainerStarted","Data":"7efde83610347c0d025e46c8e7d680a2ae94fbb154e2d435215c941d02a50b88"} Feb 18 00:51:04 crc kubenswrapper[4847]: I0218 00:51:04.904929 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9efc747e-2d6f-4489-a0ed-aca538e54574","Type":"ContainerStarted","Data":"8818e9aa69cbc8fa5cbfe564f7c7fd46985cfb7227b50c32fab976fbf5b22c15"} Feb 18 00:51:04 crc kubenswrapper[4847]: I0218 00:51:04.904962 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9efc747e-2d6f-4489-a0ed-aca538e54574","Type":"ContainerStarted","Data":"7050ac75e233f7a088aca1280b3d86cf1ab71b018326c97835730737ae4122da"} Feb 18 00:51:04 crc kubenswrapper[4847]: I0218 00:51:04.904975 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9efc747e-2d6f-4489-a0ed-aca538e54574","Type":"ContainerStarted","Data":"b011d105e7f312b48a138d38a04199b9be99989915b8d3c393444ff9bef926f5"} Feb 18 00:51:04 crc kubenswrapper[4847]: I0218 00:51:04.941590 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.94156906 podStartE2EDuration="2.94156906s" podCreationTimestamp="2026-02-18 00:51:02 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:51:04.929288424 +0000 UTC m=+1538.306639366" watchObservedRunningTime="2026-02-18 00:51:04.94156906 +0000 UTC m=+1538.318919992" Feb 18 00:51:05 crc kubenswrapper[4847]: I0218 00:51:05.230553 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:51:05 crc kubenswrapper[4847]: I0218 00:51:05.256194 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:51:05 crc kubenswrapper[4847]: I0218 00:51:05.261066 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 00:51:05 crc kubenswrapper[4847]: I0218 00:51:05.261324 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.121223 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.281883 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="42fa6a77-748b-44bb-8a59-c9b083d917df" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.239:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.281891 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="42fa6a77-748b-44bb-8a59-c9b083d917df" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.239:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.369685 4847 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-cell-mapping-7jkqm"] Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.371083 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7jkqm" Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.374132 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.385525 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.394594 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-7jkqm"] Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.414759 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-scripts\") pod \"nova-cell1-cell-mapping-7jkqm\" (UID: \"ab094cba-aca4-4ea7-a5a9-b13d4ac35263\") " pod="openstack/nova-cell1-cell-mapping-7jkqm" Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.414830 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8z2p\" (UniqueName: \"kubernetes.io/projected/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-kube-api-access-w8z2p\") pod \"nova-cell1-cell-mapping-7jkqm\" (UID: \"ab094cba-aca4-4ea7-a5a9-b13d4ac35263\") " pod="openstack/nova-cell1-cell-mapping-7jkqm" Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.414903 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-config-data\") pod \"nova-cell1-cell-mapping-7jkqm\" (UID: \"ab094cba-aca4-4ea7-a5a9-b13d4ac35263\") " pod="openstack/nova-cell1-cell-mapping-7jkqm" Feb 18 00:51:06 crc 
kubenswrapper[4847]: I0218 00:51:06.414937 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-7jkqm\" (UID: \"ab094cba-aca4-4ea7-a5a9-b13d4ac35263\") " pod="openstack/nova-cell1-cell-mapping-7jkqm" Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.483815 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.517355 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-scripts\") pod \"nova-cell1-cell-mapping-7jkqm\" (UID: \"ab094cba-aca4-4ea7-a5a9-b13d4ac35263\") " pod="openstack/nova-cell1-cell-mapping-7jkqm" Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.517427 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8z2p\" (UniqueName: \"kubernetes.io/projected/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-kube-api-access-w8z2p\") pod \"nova-cell1-cell-mapping-7jkqm\" (UID: \"ab094cba-aca4-4ea7-a5a9-b13d4ac35263\") " pod="openstack/nova-cell1-cell-mapping-7jkqm" Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.517479 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-config-data\") pod \"nova-cell1-cell-mapping-7jkqm\" (UID: \"ab094cba-aca4-4ea7-a5a9-b13d4ac35263\") " pod="openstack/nova-cell1-cell-mapping-7jkqm" Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.517512 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-combined-ca-bundle\") 
pod \"nova-cell1-cell-mapping-7jkqm\" (UID: \"ab094cba-aca4-4ea7-a5a9-b13d4ac35263\") " pod="openstack/nova-cell1-cell-mapping-7jkqm" Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.523757 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-7jkqm\" (UID: \"ab094cba-aca4-4ea7-a5a9-b13d4ac35263\") " pod="openstack/nova-cell1-cell-mapping-7jkqm" Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.525083 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-config-data\") pod \"nova-cell1-cell-mapping-7jkqm\" (UID: \"ab094cba-aca4-4ea7-a5a9-b13d4ac35263\") " pod="openstack/nova-cell1-cell-mapping-7jkqm" Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.532142 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-scripts\") pod \"nova-cell1-cell-mapping-7jkqm\" (UID: \"ab094cba-aca4-4ea7-a5a9-b13d4ac35263\") " pod="openstack/nova-cell1-cell-mapping-7jkqm" Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.549004 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8z2p\" (UniqueName: \"kubernetes.io/projected/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-kube-api-access-w8z2p\") pod \"nova-cell1-cell-mapping-7jkqm\" (UID: \"ab094cba-aca4-4ea7-a5a9-b13d4ac35263\") " pod="openstack/nova-cell1-cell-mapping-7jkqm" Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.577344 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-xmxcs"] Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.577892 4847 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" podUID="f4e7ff27-612f-4c09-83a5-6405f65f4f86" containerName="dnsmasq-dns" containerID="cri-o://dc3d0fb14c2275dac72d9aeb961f788e14afeb57963c31c02378802f106304c0" gracePeriod=10 Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.691113 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7jkqm" Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.969989 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1949379c-49e4-405f-8716-254129699489","Type":"ContainerStarted","Data":"64911e2822c78a88847f42b182b440fbdaa8d33c3796b8e3dfea2abd7889134a"} Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.970163 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1949379c-49e4-405f-8716-254129699489" containerName="ceilometer-central-agent" containerID="cri-o://117299d45a910586ea273e72c6ca0bf14bec6a6bb258d63ed2fd5d97eee2cba3" gracePeriod=30 Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.970397 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.970457 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1949379c-49e4-405f-8716-254129699489" containerName="proxy-httpd" containerID="cri-o://64911e2822c78a88847f42b182b440fbdaa8d33c3796b8e3dfea2abd7889134a" gracePeriod=30 Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.970544 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1949379c-49e4-405f-8716-254129699489" containerName="ceilometer-notification-agent" containerID="cri-o://894b3c076fbf9485d22494889a53c3916264c4b03f348b1db79bc06735194bd9" gracePeriod=30 Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.970584 4847 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1949379c-49e4-405f-8716-254129699489" containerName="sg-core" containerID="cri-o://7efde83610347c0d025e46c8e7d680a2ae94fbb154e2d435215c941d02a50b88" gracePeriod=30 Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.993402 4847 generic.go:334] "Generic (PLEG): container finished" podID="f4e7ff27-612f-4c09-83a5-6405f65f4f86" containerID="dc3d0fb14c2275dac72d9aeb961f788e14afeb57963c31c02378802f106304c0" exitCode=0 Feb 18 00:51:06 crc kubenswrapper[4847]: I0218 00:51:06.993682 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" event={"ID":"f4e7ff27-612f-4c09-83a5-6405f65f4f86","Type":"ContainerDied","Data":"dc3d0fb14c2275dac72d9aeb961f788e14afeb57963c31c02378802f106304c0"} Feb 18 00:51:07 crc kubenswrapper[4847]: I0218 00:51:07.000350 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.110895696 podStartE2EDuration="7.000332543s" podCreationTimestamp="2026-02-18 00:51:00 +0000 UTC" firstStartedPulling="2026-02-18 00:51:01.858013097 +0000 UTC m=+1535.235364069" lastFinishedPulling="2026-02-18 00:51:05.747449974 +0000 UTC m=+1539.124800916" observedRunningTime="2026-02-18 00:51:06.999163725 +0000 UTC m=+1540.376514677" watchObservedRunningTime="2026-02-18 00:51:07.000332543 +0000 UTC m=+1540.377683485" Feb 18 00:51:07 crc kubenswrapper[4847]: I0218 00:51:07.308387 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:51:07 crc kubenswrapper[4847]: I0218 00:51:07.339759 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-config\") pod \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " Feb 18 00:51:07 crc kubenswrapper[4847]: I0218 00:51:07.340155 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-dns-swift-storage-0\") pod \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " Feb 18 00:51:07 crc kubenswrapper[4847]: I0218 00:51:07.340211 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdknr\" (UniqueName: \"kubernetes.io/projected/f4e7ff27-612f-4c09-83a5-6405f65f4f86-kube-api-access-mdknr\") pod \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " Feb 18 00:51:07 crc kubenswrapper[4847]: I0218 00:51:07.340240 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-ovsdbserver-nb\") pod \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " Feb 18 00:51:07 crc kubenswrapper[4847]: I0218 00:51:07.340294 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-ovsdbserver-sb\") pod \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " Feb 18 00:51:07 crc kubenswrapper[4847]: I0218 00:51:07.340321 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-dns-svc\") pod \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\" (UID: \"f4e7ff27-612f-4c09-83a5-6405f65f4f86\") " Feb 18 00:51:07 crc kubenswrapper[4847]: I0218 00:51:07.385018 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4e7ff27-612f-4c09-83a5-6405f65f4f86-kube-api-access-mdknr" (OuterVolumeSpecName: "kube-api-access-mdknr") pod "f4e7ff27-612f-4c09-83a5-6405f65f4f86" (UID: "f4e7ff27-612f-4c09-83a5-6405f65f4f86"). InnerVolumeSpecName "kube-api-access-mdknr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:51:07 crc kubenswrapper[4847]: I0218 00:51:07.452162 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdknr\" (UniqueName: \"kubernetes.io/projected/f4e7ff27-612f-4c09-83a5-6405f65f4f86-kube-api-access-mdknr\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:07 crc kubenswrapper[4847]: I0218 00:51:07.468853 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-7jkqm"] Feb 18 00:51:07 crc kubenswrapper[4847]: I0218 00:51:07.469200 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-config" (OuterVolumeSpecName: "config") pod "f4e7ff27-612f-4c09-83a5-6405f65f4f86" (UID: "f4e7ff27-612f-4c09-83a5-6405f65f4f86"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:51:07 crc kubenswrapper[4847]: I0218 00:51:07.491892 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f4e7ff27-612f-4c09-83a5-6405f65f4f86" (UID: "f4e7ff27-612f-4c09-83a5-6405f65f4f86"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:51:07 crc kubenswrapper[4847]: I0218 00:51:07.499135 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f4e7ff27-612f-4c09-83a5-6405f65f4f86" (UID: "f4e7ff27-612f-4c09-83a5-6405f65f4f86"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:51:07 crc kubenswrapper[4847]: I0218 00:51:07.517937 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f4e7ff27-612f-4c09-83a5-6405f65f4f86" (UID: "f4e7ff27-612f-4c09-83a5-6405f65f4f86"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:51:07 crc kubenswrapper[4847]: I0218 00:51:07.520505 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f4e7ff27-612f-4c09-83a5-6405f65f4f86" (UID: "f4e7ff27-612f-4c09-83a5-6405f65f4f86"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:51:07 crc kubenswrapper[4847]: I0218 00:51:07.553832 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:07 crc kubenswrapper[4847]: I0218 00:51:07.553867 4847 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:07 crc kubenswrapper[4847]: I0218 00:51:07.553878 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:07 crc kubenswrapper[4847]: I0218 00:51:07.553891 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:07 crc kubenswrapper[4847]: I0218 00:51:07.553899 4847 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4e7ff27-612f-4c09-83a5-6405f65f4f86-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:07 crc kubenswrapper[4847]: E0218 00:51:07.703588 4847 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1949379c_49e4_405f_8716_254129699489.slice/crio-894b3c076fbf9485d22494889a53c3916264c4b03f348b1db79bc06735194bd9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1949379c_49e4_405f_8716_254129699489.slice/crio-conmon-894b3c076fbf9485d22494889a53c3916264c4b03f348b1db79bc06735194bd9.scope\": 
RecentStats: unable to find data in memory cache]" Feb 18 00:51:08 crc kubenswrapper[4847]: I0218 00:51:08.005202 4847 generic.go:334] "Generic (PLEG): container finished" podID="1949379c-49e4-405f-8716-254129699489" containerID="64911e2822c78a88847f42b182b440fbdaa8d33c3796b8e3dfea2abd7889134a" exitCode=0 Feb 18 00:51:08 crc kubenswrapper[4847]: I0218 00:51:08.005698 4847 generic.go:334] "Generic (PLEG): container finished" podID="1949379c-49e4-405f-8716-254129699489" containerID="7efde83610347c0d025e46c8e7d680a2ae94fbb154e2d435215c941d02a50b88" exitCode=2 Feb 18 00:51:08 crc kubenswrapper[4847]: I0218 00:51:08.005763 4847 generic.go:334] "Generic (PLEG): container finished" podID="1949379c-49e4-405f-8716-254129699489" containerID="894b3c076fbf9485d22494889a53c3916264c4b03f348b1db79bc06735194bd9" exitCode=0 Feb 18 00:51:08 crc kubenswrapper[4847]: I0218 00:51:08.005260 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1949379c-49e4-405f-8716-254129699489","Type":"ContainerDied","Data":"64911e2822c78a88847f42b182b440fbdaa8d33c3796b8e3dfea2abd7889134a"} Feb 18 00:51:08 crc kubenswrapper[4847]: I0218 00:51:08.005949 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1949379c-49e4-405f-8716-254129699489","Type":"ContainerDied","Data":"7efde83610347c0d025e46c8e7d680a2ae94fbb154e2d435215c941d02a50b88"} Feb 18 00:51:08 crc kubenswrapper[4847]: I0218 00:51:08.006032 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1949379c-49e4-405f-8716-254129699489","Type":"ContainerDied","Data":"894b3c076fbf9485d22494889a53c3916264c4b03f348b1db79bc06735194bd9"} Feb 18 00:51:08 crc kubenswrapper[4847]: I0218 00:51:08.007989 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7jkqm" 
event={"ID":"ab094cba-aca4-4ea7-a5a9-b13d4ac35263","Type":"ContainerStarted","Data":"a327da8e984da18ce271e2a004d1ff5af75dab0a1caccb5ab62599cf9859d244"} Feb 18 00:51:08 crc kubenswrapper[4847]: I0218 00:51:08.008022 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7jkqm" event={"ID":"ab094cba-aca4-4ea7-a5a9-b13d4ac35263","Type":"ContainerStarted","Data":"2c062a514143f38cc197fb5284bfebd18fc7c106e8e3f4ecc43d931094478503"} Feb 18 00:51:08 crc kubenswrapper[4847]: I0218 00:51:08.010345 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" event={"ID":"f4e7ff27-612f-4c09-83a5-6405f65f4f86","Type":"ContainerDied","Data":"e7dda8c3639cf5bf1d5ecbb09bb556998057e40025f8c225f0697f528d25d383"} Feb 18 00:51:08 crc kubenswrapper[4847]: I0218 00:51:08.010416 4847 scope.go:117] "RemoveContainer" containerID="dc3d0fb14c2275dac72d9aeb961f788e14afeb57963c31c02378802f106304c0" Feb 18 00:51:08 crc kubenswrapper[4847]: I0218 00:51:08.010614 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-xmxcs" Feb 18 00:51:08 crc kubenswrapper[4847]: I0218 00:51:08.036811 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-7jkqm" podStartSLOduration=2.036790754 podStartE2EDuration="2.036790754s" podCreationTimestamp="2026-02-18 00:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:51:08.025166704 +0000 UTC m=+1541.402517646" watchObservedRunningTime="2026-02-18 00:51:08.036790754 +0000 UTC m=+1541.414141686" Feb 18 00:51:08 crc kubenswrapper[4847]: I0218 00:51:08.045425 4847 scope.go:117] "RemoveContainer" containerID="93ea82dfc5f84fc14afed49caad664b1fb6f8bbf78e132cb51c1ad7d6b9fd6e0" Feb 18 00:51:08 crc kubenswrapper[4847]: I0218 00:51:08.083868 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-xmxcs"] Feb 18 00:51:08 crc kubenswrapper[4847]: I0218 00:51:08.106728 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-xmxcs"] Feb 18 00:51:09 crc kubenswrapper[4847]: I0218 00:51:09.425885 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4e7ff27-612f-4c09-83a5-6405f65f4f86" path="/var/lib/kubelet/pods/f4e7ff27-612f-4c09-83a5-6405f65f4f86/volumes" Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.075393 4847 generic.go:334] "Generic (PLEG): container finished" podID="1949379c-49e4-405f-8716-254129699489" containerID="117299d45a910586ea273e72c6ca0bf14bec6a6bb258d63ed2fd5d97eee2cba3" exitCode=0 Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.075467 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1949379c-49e4-405f-8716-254129699489","Type":"ContainerDied","Data":"117299d45a910586ea273e72c6ca0bf14bec6a6bb258d63ed2fd5d97eee2cba3"} Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 
00:51:12.076067 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1949379c-49e4-405f-8716-254129699489","Type":"ContainerDied","Data":"6c6dc944078bc9c0412ca7255de3e69d5176e177e89dac55eb3bb8925e85c4f9"} Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.076086 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c6dc944078bc9c0412ca7255de3e69d5176e177e89dac55eb3bb8925e85c4f9" Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.146295 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.164472 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-sg-core-conf-yaml\") pod \"1949379c-49e4-405f-8716-254129699489\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.165471 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pwh8\" (UniqueName: \"kubernetes.io/projected/1949379c-49e4-405f-8716-254129699489-kube-api-access-9pwh8\") pod \"1949379c-49e4-405f-8716-254129699489\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.165592 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-ceilometer-tls-certs\") pod \"1949379c-49e4-405f-8716-254129699489\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.165657 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-scripts\") pod 
\"1949379c-49e4-405f-8716-254129699489\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.165747 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-combined-ca-bundle\") pod \"1949379c-49e4-405f-8716-254129699489\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.165793 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1949379c-49e4-405f-8716-254129699489-log-httpd\") pod \"1949379c-49e4-405f-8716-254129699489\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.165851 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1949379c-49e4-405f-8716-254129699489-run-httpd\") pod \"1949379c-49e4-405f-8716-254129699489\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.165876 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-config-data\") pod \"1949379c-49e4-405f-8716-254129699489\" (UID: \"1949379c-49e4-405f-8716-254129699489\") " Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.166327 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1949379c-49e4-405f-8716-254129699489-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1949379c-49e4-405f-8716-254129699489" (UID: "1949379c-49e4-405f-8716-254129699489"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.166422 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1949379c-49e4-405f-8716-254129699489-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1949379c-49e4-405f-8716-254129699489" (UID: "1949379c-49e4-405f-8716-254129699489"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.199811 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-scripts" (OuterVolumeSpecName: "scripts") pod "1949379c-49e4-405f-8716-254129699489" (UID: "1949379c-49e4-405f-8716-254129699489"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.203961 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.204008 4847 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1949379c-49e4-405f-8716-254129699489-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.204023 4847 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1949379c-49e4-405f-8716-254129699489-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.214010 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1949379c-49e4-405f-8716-254129699489-kube-api-access-9pwh8" (OuterVolumeSpecName: "kube-api-access-9pwh8") pod "1949379c-49e4-405f-8716-254129699489" (UID: 
"1949379c-49e4-405f-8716-254129699489"). InnerVolumeSpecName "kube-api-access-9pwh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.282498 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1949379c-49e4-405f-8716-254129699489" (UID: "1949379c-49e4-405f-8716-254129699489"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.308231 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pwh8\" (UniqueName: \"kubernetes.io/projected/1949379c-49e4-405f-8716-254129699489-kube-api-access-9pwh8\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.308259 4847 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.331582 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "1949379c-49e4-405f-8716-254129699489" (UID: "1949379c-49e4-405f-8716-254129699489"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.401696 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1949379c-49e4-405f-8716-254129699489" (UID: "1949379c-49e4-405f-8716-254129699489"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.410383 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.410414 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.416257 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-config-data" (OuterVolumeSpecName: "config-data") pod "1949379c-49e4-405f-8716-254129699489" (UID: "1949379c-49e4-405f-8716-254129699489"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:51:12 crc kubenswrapper[4847]: I0218 00:51:12.513072 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1949379c-49e4-405f-8716-254129699489-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.090276 4847 generic.go:334] "Generic (PLEG): container finished" podID="ab094cba-aca4-4ea7-a5a9-b13d4ac35263" containerID="a327da8e984da18ce271e2a004d1ff5af75dab0a1caccb5ab62599cf9859d244" exitCode=0 Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.090380 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7jkqm" event={"ID":"ab094cba-aca4-4ea7-a5a9-b13d4ac35263","Type":"ContainerDied","Data":"a327da8e984da18ce271e2a004d1ff5af75dab0a1caccb5ab62599cf9859d244"} Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.090752 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.145777 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.161454 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.179250 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:51:13 crc kubenswrapper[4847]: E0218 00:51:13.179738 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1949379c-49e4-405f-8716-254129699489" containerName="ceilometer-central-agent" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.179756 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="1949379c-49e4-405f-8716-254129699489" containerName="ceilometer-central-agent" Feb 18 00:51:13 crc kubenswrapper[4847]: E0218 00:51:13.179778 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e7ff27-612f-4c09-83a5-6405f65f4f86" containerName="init" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.179786 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e7ff27-612f-4c09-83a5-6405f65f4f86" containerName="init" Feb 18 00:51:13 crc kubenswrapper[4847]: E0218 00:51:13.179799 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1949379c-49e4-405f-8716-254129699489" containerName="sg-core" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.179806 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="1949379c-49e4-405f-8716-254129699489" containerName="sg-core" Feb 18 00:51:13 crc kubenswrapper[4847]: E0218 00:51:13.179812 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1949379c-49e4-405f-8716-254129699489" containerName="proxy-httpd" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.179818 4847 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1949379c-49e4-405f-8716-254129699489" containerName="proxy-httpd" Feb 18 00:51:13 crc kubenswrapper[4847]: E0218 00:51:13.179826 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1949379c-49e4-405f-8716-254129699489" containerName="ceilometer-notification-agent" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.179832 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="1949379c-49e4-405f-8716-254129699489" containerName="ceilometer-notification-agent" Feb 18 00:51:13 crc kubenswrapper[4847]: E0218 00:51:13.179841 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e7ff27-612f-4c09-83a5-6405f65f4f86" containerName="dnsmasq-dns" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.179847 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e7ff27-612f-4c09-83a5-6405f65f4f86" containerName="dnsmasq-dns" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.180024 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="1949379c-49e4-405f-8716-254129699489" containerName="sg-core" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.180037 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="1949379c-49e4-405f-8716-254129699489" containerName="ceilometer-central-agent" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.180049 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4e7ff27-612f-4c09-83a5-6405f65f4f86" containerName="dnsmasq-dns" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.180065 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="1949379c-49e4-405f-8716-254129699489" containerName="ceilometer-notification-agent" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.180073 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="1949379c-49e4-405f-8716-254129699489" containerName="proxy-httpd" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.182348 4847 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.186552 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.186592 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.186906 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.199111 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.231572 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/108d0d51-c527-4d5b-8129-0e0df3e355c2-log-httpd\") pod \"ceilometer-0\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.231769 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq2qp\" (UniqueName: \"kubernetes.io/projected/108d0d51-c527-4d5b-8129-0e0df3e355c2-kube-api-access-bq2qp\") pod \"ceilometer-0\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.231844 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.231886 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.231919 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-config-data\") pod \"ceilometer-0\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.231947 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.232147 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-scripts\") pod \"ceilometer-0\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.232303 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/108d0d51-c527-4d5b-8129-0e0df3e355c2-run-httpd\") pod \"ceilometer-0\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.334216 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/108d0d51-c527-4d5b-8129-0e0df3e355c2-log-httpd\") pod \"ceilometer-0\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.334287 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq2qp\" (UniqueName: \"kubernetes.io/projected/108d0d51-c527-4d5b-8129-0e0df3e355c2-kube-api-access-bq2qp\") pod \"ceilometer-0\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.334994 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.335039 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.335059 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-config-data\") pod \"ceilometer-0\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.335091 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 
crc kubenswrapper[4847]: I0218 00:51:13.335135 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-scripts\") pod \"ceilometer-0\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.335170 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/108d0d51-c527-4d5b-8129-0e0df3e355c2-run-httpd\") pod \"ceilometer-0\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.335333 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/108d0d51-c527-4d5b-8129-0e0df3e355c2-log-httpd\") pod \"ceilometer-0\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.335542 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/108d0d51-c527-4d5b-8129-0e0df3e355c2-run-httpd\") pod \"ceilometer-0\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.341900 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.342020 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-config-data\") pod \"ceilometer-0\" (UID: 
\"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.350278 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.350558 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.351000 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-scripts\") pod \"ceilometer-0\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.356526 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq2qp\" (UniqueName: \"kubernetes.io/projected/108d0d51-c527-4d5b-8129-0e0df3e355c2-kube-api-access-bq2qp\") pod \"ceilometer-0\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.421148 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1949379c-49e4-405f-8716-254129699489" path="/var/lib/kubelet/pods/1949379c-49e4-405f-8716-254129699489/volumes" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.501890 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.565022 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 00:51:13 crc kubenswrapper[4847]: I0218 00:51:13.565421 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 00:51:14 crc kubenswrapper[4847]: I0218 00:51:14.065743 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:51:14 crc kubenswrapper[4847]: W0218 00:51:14.070294 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod108d0d51_c527_4d5b_8129_0e0df3e355c2.slice/crio-fdb101c592f7e794c10891a40331106f88f2c21989141402800e5f788321d976 WatchSource:0}: Error finding container fdb101c592f7e794c10891a40331106f88f2c21989141402800e5f788321d976: Status 404 returned error can't find the container with id fdb101c592f7e794c10891a40331106f88f2c21989141402800e5f788321d976 Feb 18 00:51:14 crc kubenswrapper[4847]: I0218 00:51:14.148703 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"108d0d51-c527-4d5b-8129-0e0df3e355c2","Type":"ContainerStarted","Data":"fdb101c592f7e794c10891a40331106f88f2c21989141402800e5f788321d976"} Feb 18 00:51:14 crc kubenswrapper[4847]: I0218 00:51:14.404477 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:51:14 crc kubenswrapper[4847]: E0218 00:51:14.404977 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" 
podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:51:14 crc kubenswrapper[4847]: I0218 00:51:14.587807 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9efc747e-2d6f-4489-a0ed-aca538e54574" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.243:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 00:51:14 crc kubenswrapper[4847]: I0218 00:51:14.587813 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9efc747e-2d6f-4489-a0ed-aca538e54574" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.243:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 00:51:14 crc kubenswrapper[4847]: I0218 00:51:14.685483 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7jkqm" Feb 18 00:51:14 crc kubenswrapper[4847]: I0218 00:51:14.768211 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-combined-ca-bundle\") pod \"ab094cba-aca4-4ea7-a5a9-b13d4ac35263\" (UID: \"ab094cba-aca4-4ea7-a5a9-b13d4ac35263\") " Feb 18 00:51:14 crc kubenswrapper[4847]: I0218 00:51:14.768276 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-scripts\") pod \"ab094cba-aca4-4ea7-a5a9-b13d4ac35263\" (UID: \"ab094cba-aca4-4ea7-a5a9-b13d4ac35263\") " Feb 18 00:51:14 crc kubenswrapper[4847]: I0218 00:51:14.768356 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8z2p\" (UniqueName: \"kubernetes.io/projected/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-kube-api-access-w8z2p\") pod \"ab094cba-aca4-4ea7-a5a9-b13d4ac35263\" (UID: 
\"ab094cba-aca4-4ea7-a5a9-b13d4ac35263\") " Feb 18 00:51:14 crc kubenswrapper[4847]: I0218 00:51:14.768550 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-config-data\") pod \"ab094cba-aca4-4ea7-a5a9-b13d4ac35263\" (UID: \"ab094cba-aca4-4ea7-a5a9-b13d4ac35263\") " Feb 18 00:51:14 crc kubenswrapper[4847]: I0218 00:51:14.774716 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-scripts" (OuterVolumeSpecName: "scripts") pod "ab094cba-aca4-4ea7-a5a9-b13d4ac35263" (UID: "ab094cba-aca4-4ea7-a5a9-b13d4ac35263"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:51:14 crc kubenswrapper[4847]: I0218 00:51:14.776366 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-kube-api-access-w8z2p" (OuterVolumeSpecName: "kube-api-access-w8z2p") pod "ab094cba-aca4-4ea7-a5a9-b13d4ac35263" (UID: "ab094cba-aca4-4ea7-a5a9-b13d4ac35263"). InnerVolumeSpecName "kube-api-access-w8z2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:51:14 crc kubenswrapper[4847]: I0218 00:51:14.806457 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab094cba-aca4-4ea7-a5a9-b13d4ac35263" (UID: "ab094cba-aca4-4ea7-a5a9-b13d4ac35263"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:51:14 crc kubenswrapper[4847]: I0218 00:51:14.808846 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-config-data" (OuterVolumeSpecName: "config-data") pod "ab094cba-aca4-4ea7-a5a9-b13d4ac35263" (UID: "ab094cba-aca4-4ea7-a5a9-b13d4ac35263"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:51:14 crc kubenswrapper[4847]: I0218 00:51:14.870868 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:14 crc kubenswrapper[4847]: I0218 00:51:14.870899 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:14 crc kubenswrapper[4847]: I0218 00:51:14.870909 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8z2p\" (UniqueName: \"kubernetes.io/projected/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-kube-api-access-w8z2p\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:14 crc kubenswrapper[4847]: I0218 00:51:14.870919 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab094cba-aca4-4ea7-a5a9-b13d4ac35263-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:15 crc kubenswrapper[4847]: I0218 00:51:15.162430 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"108d0d51-c527-4d5b-8129-0e0df3e355c2","Type":"ContainerStarted","Data":"3ea2820168a8de51c02a4f24b4add952ccb5457d1e7772e6e8c533a559ebc60b"} Feb 18 00:51:15 crc kubenswrapper[4847]: I0218 00:51:15.164970 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-cell-mapping-7jkqm" event={"ID":"ab094cba-aca4-4ea7-a5a9-b13d4ac35263","Type":"ContainerDied","Data":"2c062a514143f38cc197fb5284bfebd18fc7c106e8e3f4ecc43d931094478503"} Feb 18 00:51:15 crc kubenswrapper[4847]: I0218 00:51:15.165011 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c062a514143f38cc197fb5284bfebd18fc7c106e8e3f4ecc43d931094478503" Feb 18 00:51:15 crc kubenswrapper[4847]: I0218 00:51:15.165067 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7jkqm" Feb 18 00:51:15 crc kubenswrapper[4847]: I0218 00:51:15.284691 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 00:51:15 crc kubenswrapper[4847]: I0218 00:51:15.284814 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 00:51:15 crc kubenswrapper[4847]: I0218 00:51:15.329933 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 00:51:15 crc kubenswrapper[4847]: I0218 00:51:15.350174 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 18 00:51:15 crc kubenswrapper[4847]: I0218 00:51:15.350446 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4e90c4a6-c9f2-4487-ab03-f98ce10417bd" containerName="nova-scheduler-scheduler" containerID="cri-o://a953a1d8b5fad80f99851cb6e362a10662d6f7e5c265d5247a60e999b411950a" gracePeriod=30 Feb 18 00:51:15 crc kubenswrapper[4847]: I0218 00:51:15.375298 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:51:15 crc kubenswrapper[4847]: I0218 00:51:15.375562 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9efc747e-2d6f-4489-a0ed-aca538e54574" 
containerName="nova-api-log" containerID="cri-o://7050ac75e233f7a088aca1280b3d86cf1ab71b018326c97835730737ae4122da" gracePeriod=30
Feb 18 00:51:15 crc kubenswrapper[4847]: I0218 00:51:15.376187 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9efc747e-2d6f-4489-a0ed-aca538e54574" containerName="nova-api-api" containerID="cri-o://8818e9aa69cbc8fa5cbfe564f7c7fd46985cfb7227b50c32fab976fbf5b22c15" gracePeriod=30
Feb 18 00:51:15 crc kubenswrapper[4847]: I0218 00:51:15.384639 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 18 00:51:15 crc kubenswrapper[4847]: I0218 00:51:15.511163 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 00:51:16 crc kubenswrapper[4847]: I0218 00:51:16.175809 4847 generic.go:334] "Generic (PLEG): container finished" podID="9efc747e-2d6f-4489-a0ed-aca538e54574" containerID="7050ac75e233f7a088aca1280b3d86cf1ab71b018326c97835730737ae4122da" exitCode=143
Feb 18 00:51:16 crc kubenswrapper[4847]: I0218 00:51:16.175889 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9efc747e-2d6f-4489-a0ed-aca538e54574","Type":"ContainerDied","Data":"7050ac75e233f7a088aca1280b3d86cf1ab71b018326c97835730737ae4122da"}
Feb 18 00:51:16 crc kubenswrapper[4847]: I0218 00:51:16.180623 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"108d0d51-c527-4d5b-8129-0e0df3e355c2","Type":"ContainerStarted","Data":"d348f31e549a45daa9b07e9273e5b941de4bacfc801b3d6868e71d7edeffa6af"}
Feb 18 00:51:17 crc kubenswrapper[4847]: I0218 00:51:17.201746 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"108d0d51-c527-4d5b-8129-0e0df3e355c2","Type":"ContainerStarted","Data":"bf16dec9289475e1b58924b3fcb59776a5f8705c970fce17296c1e5a82cdd2c5"}
Feb 18 00:51:17 crc kubenswrapper[4847]: I0218 00:51:17.204121 4847 generic.go:334] "Generic (PLEG): container finished" podID="4e90c4a6-c9f2-4487-ab03-f98ce10417bd" containerID="a953a1d8b5fad80f99851cb6e362a10662d6f7e5c265d5247a60e999b411950a" exitCode=0
Feb 18 00:51:17 crc kubenswrapper[4847]: I0218 00:51:17.204354 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="42fa6a77-748b-44bb-8a59-c9b083d917df" containerName="nova-metadata-log" containerID="cri-o://09669aaae55f26cd367547f17728cfae7cbe7eb6902b11e1fc41fb5e9d600dd5" gracePeriod=30
Feb 18 00:51:17 crc kubenswrapper[4847]: I0218 00:51:17.204652 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4e90c4a6-c9f2-4487-ab03-f98ce10417bd","Type":"ContainerDied","Data":"a953a1d8b5fad80f99851cb6e362a10662d6f7e5c265d5247a60e999b411950a"}
Feb 18 00:51:17 crc kubenswrapper[4847]: I0218 00:51:17.204753 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4e90c4a6-c9f2-4487-ab03-f98ce10417bd","Type":"ContainerDied","Data":"98f5c3309b6de687bff35785dd0af58d11cc11427ebd0676d13875fc52112799"}
Feb 18 00:51:17 crc kubenswrapper[4847]: I0218 00:51:17.204830 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98f5c3309b6de687bff35785dd0af58d11cc11427ebd0676d13875fc52112799"
Feb 18 00:51:17 crc kubenswrapper[4847]: I0218 00:51:17.204781 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="42fa6a77-748b-44bb-8a59-c9b083d917df" containerName="nova-metadata-metadata" containerID="cri-o://be301d0023c1279e1cd5d059aaafd41c73d963d7b40835cff350ec2d3d8728bb" gracePeriod=30
Feb 18 00:51:17 crc kubenswrapper[4847]: I0218 00:51:17.271768 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 18 00:51:17 crc kubenswrapper[4847]: I0218 00:51:17.365637 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e90c4a6-c9f2-4487-ab03-f98ce10417bd-config-data\") pod \"4e90c4a6-c9f2-4487-ab03-f98ce10417bd\" (UID: \"4e90c4a6-c9f2-4487-ab03-f98ce10417bd\") "
Feb 18 00:51:17 crc kubenswrapper[4847]: I0218 00:51:17.365992 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45d5h\" (UniqueName: \"kubernetes.io/projected/4e90c4a6-c9f2-4487-ab03-f98ce10417bd-kube-api-access-45d5h\") pod \"4e90c4a6-c9f2-4487-ab03-f98ce10417bd\" (UID: \"4e90c4a6-c9f2-4487-ab03-f98ce10417bd\") "
Feb 18 00:51:17 crc kubenswrapper[4847]: I0218 00:51:17.366017 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e90c4a6-c9f2-4487-ab03-f98ce10417bd-combined-ca-bundle\") pod \"4e90c4a6-c9f2-4487-ab03-f98ce10417bd\" (UID: \"4e90c4a6-c9f2-4487-ab03-f98ce10417bd\") "
Feb 18 00:51:17 crc kubenswrapper[4847]: I0218 00:51:17.371362 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e90c4a6-c9f2-4487-ab03-f98ce10417bd-kube-api-access-45d5h" (OuterVolumeSpecName: "kube-api-access-45d5h") pod "4e90c4a6-c9f2-4487-ab03-f98ce10417bd" (UID: "4e90c4a6-c9f2-4487-ab03-f98ce10417bd"). InnerVolumeSpecName "kube-api-access-45d5h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 00:51:17 crc kubenswrapper[4847]: I0218 00:51:17.417866 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e90c4a6-c9f2-4487-ab03-f98ce10417bd-config-data" (OuterVolumeSpecName: "config-data") pod "4e90c4a6-c9f2-4487-ab03-f98ce10417bd" (UID: "4e90c4a6-c9f2-4487-ab03-f98ce10417bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:51:17 crc kubenswrapper[4847]: I0218 00:51:17.441957 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e90c4a6-c9f2-4487-ab03-f98ce10417bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e90c4a6-c9f2-4487-ab03-f98ce10417bd" (UID: "4e90c4a6-c9f2-4487-ab03-f98ce10417bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:51:17 crc kubenswrapper[4847]: I0218 00:51:17.468699 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45d5h\" (UniqueName: \"kubernetes.io/projected/4e90c4a6-c9f2-4487-ab03-f98ce10417bd-kube-api-access-45d5h\") on node \"crc\" DevicePath \"\""
Feb 18 00:51:17 crc kubenswrapper[4847]: I0218 00:51:17.468948 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e90c4a6-c9f2-4487-ab03-f98ce10417bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 00:51:17 crc kubenswrapper[4847]: I0218 00:51:17.469070 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e90c4a6-c9f2-4487-ab03-f98ce10417bd-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.219985 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"108d0d51-c527-4d5b-8129-0e0df3e355c2","Type":"ContainerStarted","Data":"3762d3bdfb43664204c4ac87da22ae93f428cd087f160ef6a0509417461d225f"}
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.220472 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.234086 4847 generic.go:334] "Generic (PLEG): container finished" podID="42fa6a77-748b-44bb-8a59-c9b083d917df" containerID="09669aaae55f26cd367547f17728cfae7cbe7eb6902b11e1fc41fb5e9d600dd5" exitCode=143
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.234181 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.234484 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"42fa6a77-748b-44bb-8a59-c9b083d917df","Type":"ContainerDied","Data":"09669aaae55f26cd367547f17728cfae7cbe7eb6902b11e1fc41fb5e9d600dd5"}
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.251928 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.162822374 podStartE2EDuration="5.251868139s" podCreationTimestamp="2026-02-18 00:51:13 +0000 UTC" firstStartedPulling="2026-02-18 00:51:14.076071699 +0000 UTC m=+1547.453422681" lastFinishedPulling="2026-02-18 00:51:17.165117504 +0000 UTC m=+1550.542468446" observedRunningTime="2026-02-18 00:51:18.239058941 +0000 UTC m=+1551.616409883" watchObservedRunningTime="2026-02-18 00:51:18.251868139 +0000 UTC m=+1551.629219081"
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.301580 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.334739 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.356139 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 00:51:18 crc kubenswrapper[4847]: E0218 00:51:18.357866 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e90c4a6-c9f2-4487-ab03-f98ce10417bd" containerName="nova-scheduler-scheduler"
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.359283 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e90c4a6-c9f2-4487-ab03-f98ce10417bd" containerName="nova-scheduler-scheduler"
Feb 18 00:51:18 crc kubenswrapper[4847]: E0218 00:51:18.359406 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab094cba-aca4-4ea7-a5a9-b13d4ac35263" containerName="nova-manage"
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.359459 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab094cba-aca4-4ea7-a5a9-b13d4ac35263" containerName="nova-manage"
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.359993 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab094cba-aca4-4ea7-a5a9-b13d4ac35263" containerName="nova-manage"
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.360118 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e90c4a6-c9f2-4487-ab03-f98ce10417bd" containerName="nova-scheduler-scheduler"
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.362301 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.365461 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.380136 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.388487 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bnzp\" (UniqueName: \"kubernetes.io/projected/38e79629-56ea-4262-875c-8dd1efdbd88f-kube-api-access-4bnzp\") pod \"nova-scheduler-0\" (UID: \"38e79629-56ea-4262-875c-8dd1efdbd88f\") " pod="openstack/nova-scheduler-0"
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.388770 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e79629-56ea-4262-875c-8dd1efdbd88f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"38e79629-56ea-4262-875c-8dd1efdbd88f\") " pod="openstack/nova-scheduler-0"
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.389009 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e79629-56ea-4262-875c-8dd1efdbd88f-config-data\") pod \"nova-scheduler-0\" (UID: \"38e79629-56ea-4262-875c-8dd1efdbd88f\") " pod="openstack/nova-scheduler-0"
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.491776 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bnzp\" (UniqueName: \"kubernetes.io/projected/38e79629-56ea-4262-875c-8dd1efdbd88f-kube-api-access-4bnzp\") pod \"nova-scheduler-0\" (UID: \"38e79629-56ea-4262-875c-8dd1efdbd88f\") " pod="openstack/nova-scheduler-0"
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.491851 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e79629-56ea-4262-875c-8dd1efdbd88f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"38e79629-56ea-4262-875c-8dd1efdbd88f\") " pod="openstack/nova-scheduler-0"
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.491894 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e79629-56ea-4262-875c-8dd1efdbd88f-config-data\") pod \"nova-scheduler-0\" (UID: \"38e79629-56ea-4262-875c-8dd1efdbd88f\") " pod="openstack/nova-scheduler-0"
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.508058 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bnzp\" (UniqueName: \"kubernetes.io/projected/38e79629-56ea-4262-875c-8dd1efdbd88f-kube-api-access-4bnzp\") pod \"nova-scheduler-0\" (UID: \"38e79629-56ea-4262-875c-8dd1efdbd88f\") " pod="openstack/nova-scheduler-0"
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.509018 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38e79629-56ea-4262-875c-8dd1efdbd88f-config-data\") pod \"nova-scheduler-0\" (UID: \"38e79629-56ea-4262-875c-8dd1efdbd88f\") " pod="openstack/nova-scheduler-0"
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.510040 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e79629-56ea-4262-875c-8dd1efdbd88f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"38e79629-56ea-4262-875c-8dd1efdbd88f\") " pod="openstack/nova-scheduler-0"
Feb 18 00:51:18 crc kubenswrapper[4847]: I0218 00:51:18.685391 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 18 00:51:19 crc kubenswrapper[4847]: I0218 00:51:19.176562 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 18 00:51:19 crc kubenswrapper[4847]: I0218 00:51:19.247497 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"38e79629-56ea-4262-875c-8dd1efdbd88f","Type":"ContainerStarted","Data":"52fdfc25b94d7c1179afbee946dad8f6c14579e1d630d490cba7decca4c19745"}
Feb 18 00:51:19 crc kubenswrapper[4847]: I0218 00:51:19.419032 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e90c4a6-c9f2-4487-ab03-f98ce10417bd" path="/var/lib/kubelet/pods/4e90c4a6-c9f2-4487-ab03-f98ce10417bd/volumes"
Feb 18 00:51:20 crc kubenswrapper[4847]: I0218 00:51:20.263927 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"38e79629-56ea-4262-875c-8dd1efdbd88f","Type":"ContainerStarted","Data":"d6244e51bc51b411d0cffafcc36c6693a703c47ecb54a166a455aefb48f332eb"}
Feb 18 00:51:20 crc kubenswrapper[4847]: I0218 00:51:20.291091 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.29107185 podStartE2EDuration="2.29107185s" podCreationTimestamp="2026-02-18 00:51:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:51:20.284615284 +0000 UTC m=+1553.661966236" watchObservedRunningTime="2026-02-18 00:51:20.29107185 +0000 UTC m=+1553.668422802"
Feb 18 00:51:20 crc kubenswrapper[4847]: I0218 00:51:20.343305 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="42fa6a77-748b-44bb-8a59-c9b083d917df" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.239:8775/\": read tcp 10.217.0.2:36346->10.217.0.239:8775: read: connection reset by peer"
Feb 18 00:51:20 crc kubenswrapper[4847]: I0218 00:51:20.343363 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="42fa6a77-748b-44bb-8a59-c9b083d917df" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.239:8775/\": read tcp 10.217.0.2:36362->10.217.0.239:8775: read: connection reset by peer"
Feb 18 00:51:20 crc kubenswrapper[4847]: I0218 00:51:20.969217 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.064657 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m48qs\" (UniqueName: \"kubernetes.io/projected/42fa6a77-748b-44bb-8a59-c9b083d917df-kube-api-access-m48qs\") pod \"42fa6a77-748b-44bb-8a59-c9b083d917df\" (UID: \"42fa6a77-748b-44bb-8a59-c9b083d917df\") "
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.064993 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42fa6a77-748b-44bb-8a59-c9b083d917df-logs\") pod \"42fa6a77-748b-44bb-8a59-c9b083d917df\" (UID: \"42fa6a77-748b-44bb-8a59-c9b083d917df\") "
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.065153 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42fa6a77-748b-44bb-8a59-c9b083d917df-combined-ca-bundle\") pod \"42fa6a77-748b-44bb-8a59-c9b083d917df\" (UID: \"42fa6a77-748b-44bb-8a59-c9b083d917df\") "
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.065188 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42fa6a77-748b-44bb-8a59-c9b083d917df-config-data\") pod \"42fa6a77-748b-44bb-8a59-c9b083d917df\" (UID: \"42fa6a77-748b-44bb-8a59-c9b083d917df\") "
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.065335 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/42fa6a77-748b-44bb-8a59-c9b083d917df-nova-metadata-tls-certs\") pod \"42fa6a77-748b-44bb-8a59-c9b083d917df\" (UID: \"42fa6a77-748b-44bb-8a59-c9b083d917df\") "
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.065722 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42fa6a77-748b-44bb-8a59-c9b083d917df-logs" (OuterVolumeSpecName: "logs") pod "42fa6a77-748b-44bb-8a59-c9b083d917df" (UID: "42fa6a77-748b-44bb-8a59-c9b083d917df"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.066025 4847 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42fa6a77-748b-44bb-8a59-c9b083d917df-logs\") on node \"crc\" DevicePath \"\""
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.077513 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42fa6a77-748b-44bb-8a59-c9b083d917df-kube-api-access-m48qs" (OuterVolumeSpecName: "kube-api-access-m48qs") pod "42fa6a77-748b-44bb-8a59-c9b083d917df" (UID: "42fa6a77-748b-44bb-8a59-c9b083d917df"). InnerVolumeSpecName "kube-api-access-m48qs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.113749 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42fa6a77-748b-44bb-8a59-c9b083d917df-config-data" (OuterVolumeSpecName: "config-data") pod "42fa6a77-748b-44bb-8a59-c9b083d917df" (UID: "42fa6a77-748b-44bb-8a59-c9b083d917df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.154234 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42fa6a77-748b-44bb-8a59-c9b083d917df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42fa6a77-748b-44bb-8a59-c9b083d917df" (UID: "42fa6a77-748b-44bb-8a59-c9b083d917df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.167983 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42fa6a77-748b-44bb-8a59-c9b083d917df-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.168015 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42fa6a77-748b-44bb-8a59-c9b083d917df-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.168025 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m48qs\" (UniqueName: \"kubernetes.io/projected/42fa6a77-748b-44bb-8a59-c9b083d917df-kube-api-access-m48qs\") on node \"crc\" DevicePath \"\""
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.205772 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42fa6a77-748b-44bb-8a59-c9b083d917df-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "42fa6a77-748b-44bb-8a59-c9b083d917df" (UID: "42fa6a77-748b-44bb-8a59-c9b083d917df"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.269471 4847 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/42fa6a77-748b-44bb-8a59-c9b083d917df-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.279208 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.279394 4847 generic.go:334] "Generic (PLEG): container finished" podID="42fa6a77-748b-44bb-8a59-c9b083d917df" containerID="be301d0023c1279e1cd5d059aaafd41c73d963d7b40835cff350ec2d3d8728bb" exitCode=0
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.279450 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"42fa6a77-748b-44bb-8a59-c9b083d917df","Type":"ContainerDied","Data":"be301d0023c1279e1cd5d059aaafd41c73d963d7b40835cff350ec2d3d8728bb"}
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.279476 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"42fa6a77-748b-44bb-8a59-c9b083d917df","Type":"ContainerDied","Data":"7d98dd09e3c9e49e29bd7b2980cc766d89bc1ef2ace3896024d6e4da2f0cf3f7"}
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.279493 4847 scope.go:117] "RemoveContainer" containerID="be301d0023c1279e1cd5d059aaafd41c73d963d7b40835cff350ec2d3d8728bb"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.279582 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.283167 4847 generic.go:334] "Generic (PLEG): container finished" podID="9efc747e-2d6f-4489-a0ed-aca538e54574" containerID="8818e9aa69cbc8fa5cbfe564f7c7fd46985cfb7227b50c32fab976fbf5b22c15" exitCode=0
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.283614 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.284132 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9efc747e-2d6f-4489-a0ed-aca538e54574","Type":"ContainerDied","Data":"8818e9aa69cbc8fa5cbfe564f7c7fd46985cfb7227b50c32fab976fbf5b22c15"}
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.284159 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9efc747e-2d6f-4489-a0ed-aca538e54574","Type":"ContainerDied","Data":"b011d105e7f312b48a138d38a04199b9be99989915b8d3c393444ff9bef926f5"}
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.328565 4847 scope.go:117] "RemoveContainer" containerID="09669aaae55f26cd367547f17728cfae7cbe7eb6902b11e1fc41fb5e9d600dd5"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.351087 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.366151 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.370796 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-public-tls-certs\") pod \"9efc747e-2d6f-4489-a0ed-aca538e54574\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") "
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.370874 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-config-data\") pod \"9efc747e-2d6f-4489-a0ed-aca538e54574\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") "
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.370914 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9efc747e-2d6f-4489-a0ed-aca538e54574-logs\") pod \"9efc747e-2d6f-4489-a0ed-aca538e54574\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") "
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.370964 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-internal-tls-certs\") pod \"9efc747e-2d6f-4489-a0ed-aca538e54574\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") "
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.371555 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9efc747e-2d6f-4489-a0ed-aca538e54574-logs" (OuterVolumeSpecName: "logs") pod "9efc747e-2d6f-4489-a0ed-aca538e54574" (UID: "9efc747e-2d6f-4489-a0ed-aca538e54574"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.371045 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-combined-ca-bundle\") pod \"9efc747e-2d6f-4489-a0ed-aca538e54574\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") "
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.371924 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxcp6\" (UniqueName: \"kubernetes.io/projected/9efc747e-2d6f-4489-a0ed-aca538e54574-kube-api-access-hxcp6\") pod \"9efc747e-2d6f-4489-a0ed-aca538e54574\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") "
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.374436 4847 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9efc747e-2d6f-4489-a0ed-aca538e54574-logs\") on node \"crc\" DevicePath \"\""
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.382808 4847 scope.go:117] "RemoveContainer" containerID="be301d0023c1279e1cd5d059aaafd41c73d963d7b40835cff350ec2d3d8728bb"
Feb 18 00:51:21 crc kubenswrapper[4847]: E0218 00:51:21.398130 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be301d0023c1279e1cd5d059aaafd41c73d963d7b40835cff350ec2d3d8728bb\": container with ID starting with be301d0023c1279e1cd5d059aaafd41c73d963d7b40835cff350ec2d3d8728bb not found: ID does not exist" containerID="be301d0023c1279e1cd5d059aaafd41c73d963d7b40835cff350ec2d3d8728bb"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.398185 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be301d0023c1279e1cd5d059aaafd41c73d963d7b40835cff350ec2d3d8728bb"} err="failed to get container status \"be301d0023c1279e1cd5d059aaafd41c73d963d7b40835cff350ec2d3d8728bb\": rpc error: code = NotFound desc = could not find container \"be301d0023c1279e1cd5d059aaafd41c73d963d7b40835cff350ec2d3d8728bb\": container with ID starting with be301d0023c1279e1cd5d059aaafd41c73d963d7b40835cff350ec2d3d8728bb not found: ID does not exist"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.398214 4847 scope.go:117] "RemoveContainer" containerID="09669aaae55f26cd367547f17728cfae7cbe7eb6902b11e1fc41fb5e9d600dd5"
Feb 18 00:51:21 crc kubenswrapper[4847]: E0218 00:51:21.398669 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09669aaae55f26cd367547f17728cfae7cbe7eb6902b11e1fc41fb5e9d600dd5\": container with ID starting with 09669aaae55f26cd367547f17728cfae7cbe7eb6902b11e1fc41fb5e9d600dd5 not found: ID does not exist" containerID="09669aaae55f26cd367547f17728cfae7cbe7eb6902b11e1fc41fb5e9d600dd5"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.398828 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09669aaae55f26cd367547f17728cfae7cbe7eb6902b11e1fc41fb5e9d600dd5"} err="failed to get container status \"09669aaae55f26cd367547f17728cfae7cbe7eb6902b11e1fc41fb5e9d600dd5\": rpc error: code = NotFound desc = could not find container \"09669aaae55f26cd367547f17728cfae7cbe7eb6902b11e1fc41fb5e9d600dd5\": container with ID starting with 09669aaae55f26cd367547f17728cfae7cbe7eb6902b11e1fc41fb5e9d600dd5 not found: ID does not exist"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.398972 4847 scope.go:117] "RemoveContainer" containerID="8818e9aa69cbc8fa5cbfe564f7c7fd46985cfb7227b50c32fab976fbf5b22c15"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.418861 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9efc747e-2d6f-4489-a0ed-aca538e54574-kube-api-access-hxcp6" (OuterVolumeSpecName: "kube-api-access-hxcp6") pod "9efc747e-2d6f-4489-a0ed-aca538e54574" (UID: "9efc747e-2d6f-4489-a0ed-aca538e54574"). InnerVolumeSpecName "kube-api-access-hxcp6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.426045 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9efc747e-2d6f-4489-a0ed-aca538e54574" (UID: "9efc747e-2d6f-4489-a0ed-aca538e54574"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.427482 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42fa6a77-748b-44bb-8a59-c9b083d917df" path="/var/lib/kubelet/pods/42fa6a77-748b-44bb-8a59-c9b083d917df/volumes"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.431375 4847 scope.go:117] "RemoveContainer" containerID="7050ac75e233f7a088aca1280b3d86cf1ab71b018326c97835730737ae4122da"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.473268 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9efc747e-2d6f-4489-a0ed-aca538e54574" (UID: "9efc747e-2d6f-4489-a0ed-aca538e54574"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.497002 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-internal-tls-certs\") pod \"9efc747e-2d6f-4489-a0ed-aca538e54574\" (UID: \"9efc747e-2d6f-4489-a0ed-aca538e54574\") "
Feb 18 00:51:21 crc kubenswrapper[4847]: W0218 00:51:21.505893 4847 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/9efc747e-2d6f-4489-a0ed-aca538e54574/volumes/kubernetes.io~secret/internal-tls-certs
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.505938 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9efc747e-2d6f-4489-a0ed-aca538e54574" (UID: "9efc747e-2d6f-4489-a0ed-aca538e54574"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.523070 4847 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.523119 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.523131 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxcp6\" (UniqueName: \"kubernetes.io/projected/9efc747e-2d6f-4489-a0ed-aca538e54574-kube-api-access-hxcp6\") on node \"crc\" DevicePath \"\""
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.559276 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-config-data" (OuterVolumeSpecName: "config-data") pod "9efc747e-2d6f-4489-a0ed-aca538e54574" (UID: "9efc747e-2d6f-4489-a0ed-aca538e54574"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.596236 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9efc747e-2d6f-4489-a0ed-aca538e54574" (UID: "9efc747e-2d6f-4489-a0ed-aca538e54574"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.625197 4847 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.625239 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9efc747e-2d6f-4489-a0ed-aca538e54574-config-data\") on node \"crc\" DevicePath \"\""
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.642420 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 00:51:21 crc kubenswrapper[4847]: E0218 00:51:21.642953 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9efc747e-2d6f-4489-a0ed-aca538e54574" containerName="nova-api-log"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.642977 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="9efc747e-2d6f-4489-a0ed-aca538e54574" containerName="nova-api-log"
Feb 18 00:51:21 crc kubenswrapper[4847]: E0218 00:51:21.643002 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9efc747e-2d6f-4489-a0ed-aca538e54574" containerName="nova-api-api"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.643010 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="9efc747e-2d6f-4489-a0ed-aca538e54574" containerName="nova-api-api"
Feb 18 00:51:21 crc kubenswrapper[4847]: E0218 00:51:21.643032 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42fa6a77-748b-44bb-8a59-c9b083d917df" containerName="nova-metadata-metadata"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.643040 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="42fa6a77-748b-44bb-8a59-c9b083d917df" containerName="nova-metadata-metadata"
Feb 18 00:51:21 crc kubenswrapper[4847]: E0218 00:51:21.643064 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42fa6a77-748b-44bb-8a59-c9b083d917df" containerName="nova-metadata-log"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.643073 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="42fa6a77-748b-44bb-8a59-c9b083d917df" containerName="nova-metadata-log"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.643335 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="9efc747e-2d6f-4489-a0ed-aca538e54574" containerName="nova-api-api"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.643362 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="42fa6a77-748b-44bb-8a59-c9b083d917df" containerName="nova-metadata-metadata"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.643394 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="42fa6a77-748b-44bb-8a59-c9b083d917df" containerName="nova-metadata-log"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.643405 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="9efc747e-2d6f-4489-a0ed-aca538e54574" containerName="nova-api-log"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.644658 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.644763 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.647680 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.648028 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.666749 4847 scope.go:117] "RemoveContainer" containerID="8818e9aa69cbc8fa5cbfe564f7c7fd46985cfb7227b50c32fab976fbf5b22c15"
Feb 18 00:51:21 crc kubenswrapper[4847]: E0218 00:51:21.667506 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8818e9aa69cbc8fa5cbfe564f7c7fd46985cfb7227b50c32fab976fbf5b22c15\": container with ID starting with 8818e9aa69cbc8fa5cbfe564f7c7fd46985cfb7227b50c32fab976fbf5b22c15 not found: ID does not exist" containerID="8818e9aa69cbc8fa5cbfe564f7c7fd46985cfb7227b50c32fab976fbf5b22c15"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.667544 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8818e9aa69cbc8fa5cbfe564f7c7fd46985cfb7227b50c32fab976fbf5b22c15"} err="failed to get container status \"8818e9aa69cbc8fa5cbfe564f7c7fd46985cfb7227b50c32fab976fbf5b22c15\": rpc error: code = NotFound desc = could not find container \"8818e9aa69cbc8fa5cbfe564f7c7fd46985cfb7227b50c32fab976fbf5b22c15\": container with ID starting with 8818e9aa69cbc8fa5cbfe564f7c7fd46985cfb7227b50c32fab976fbf5b22c15 not found: ID does not exist"
Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.667571 4847 scope.go:117] "RemoveContainer" containerID="7050ac75e233f7a088aca1280b3d86cf1ab71b018326c97835730737ae4122da"
Feb 18 00:51:21 crc kubenswrapper[4847]: E0218 00:51:21.669780 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"7050ac75e233f7a088aca1280b3d86cf1ab71b018326c97835730737ae4122da\": container with ID starting with 7050ac75e233f7a088aca1280b3d86cf1ab71b018326c97835730737ae4122da not found: ID does not exist" containerID="7050ac75e233f7a088aca1280b3d86cf1ab71b018326c97835730737ae4122da" Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.669847 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7050ac75e233f7a088aca1280b3d86cf1ab71b018326c97835730737ae4122da"} err="failed to get container status \"7050ac75e233f7a088aca1280b3d86cf1ab71b018326c97835730737ae4122da\": rpc error: code = NotFound desc = could not find container \"7050ac75e233f7a088aca1280b3d86cf1ab71b018326c97835730737ae4122da\": container with ID starting with 7050ac75e233f7a088aca1280b3d86cf1ab71b018326c97835730737ae4122da not found: ID does not exist" Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.726746 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7d31ecb-9f5f-42bf-be6a-9e97c594247a-config-data\") pod \"nova-metadata-0\" (UID: \"d7d31ecb-9f5f-42bf-be6a-9e97c594247a\") " pod="openstack/nova-metadata-0" Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.727081 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-772k2\" (UniqueName: \"kubernetes.io/projected/d7d31ecb-9f5f-42bf-be6a-9e97c594247a-kube-api-access-772k2\") pod \"nova-metadata-0\" (UID: \"d7d31ecb-9f5f-42bf-be6a-9e97c594247a\") " pod="openstack/nova-metadata-0" Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.727305 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7d31ecb-9f5f-42bf-be6a-9e97c594247a-logs\") pod \"nova-metadata-0\" (UID: \"d7d31ecb-9f5f-42bf-be6a-9e97c594247a\") " 
pod="openstack/nova-metadata-0" Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.727551 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d31ecb-9f5f-42bf-be6a-9e97c594247a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d7d31ecb-9f5f-42bf-be6a-9e97c594247a\") " pod="openstack/nova-metadata-0" Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.727714 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7d31ecb-9f5f-42bf-be6a-9e97c594247a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d7d31ecb-9f5f-42bf-be6a-9e97c594247a\") " pod="openstack/nova-metadata-0" Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.829981 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d31ecb-9f5f-42bf-be6a-9e97c594247a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d7d31ecb-9f5f-42bf-be6a-9e97c594247a\") " pod="openstack/nova-metadata-0" Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.830370 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7d31ecb-9f5f-42bf-be6a-9e97c594247a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d7d31ecb-9f5f-42bf-be6a-9e97c594247a\") " pod="openstack/nova-metadata-0" Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.830503 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7d31ecb-9f5f-42bf-be6a-9e97c594247a-config-data\") pod \"nova-metadata-0\" (UID: \"d7d31ecb-9f5f-42bf-be6a-9e97c594247a\") " pod="openstack/nova-metadata-0" Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.830664 4847 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-772k2\" (UniqueName: \"kubernetes.io/projected/d7d31ecb-9f5f-42bf-be6a-9e97c594247a-kube-api-access-772k2\") pod \"nova-metadata-0\" (UID: \"d7d31ecb-9f5f-42bf-be6a-9e97c594247a\") " pod="openstack/nova-metadata-0" Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.830791 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7d31ecb-9f5f-42bf-be6a-9e97c594247a-logs\") pod \"nova-metadata-0\" (UID: \"d7d31ecb-9f5f-42bf-be6a-9e97c594247a\") " pod="openstack/nova-metadata-0" Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.831387 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7d31ecb-9f5f-42bf-be6a-9e97c594247a-logs\") pod \"nova-metadata-0\" (UID: \"d7d31ecb-9f5f-42bf-be6a-9e97c594247a\") " pod="openstack/nova-metadata-0" Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.835890 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7d31ecb-9f5f-42bf-be6a-9e97c594247a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d7d31ecb-9f5f-42bf-be6a-9e97c594247a\") " pod="openstack/nova-metadata-0" Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.836129 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7d31ecb-9f5f-42bf-be6a-9e97c594247a-config-data\") pod \"nova-metadata-0\" (UID: \"d7d31ecb-9f5f-42bf-be6a-9e97c594247a\") " pod="openstack/nova-metadata-0" Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.837802 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7d31ecb-9f5f-42bf-be6a-9e97c594247a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"d7d31ecb-9f5f-42bf-be6a-9e97c594247a\") " pod="openstack/nova-metadata-0" Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.849809 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-772k2\" (UniqueName: \"kubernetes.io/projected/d7d31ecb-9f5f-42bf-be6a-9e97c594247a-kube-api-access-772k2\") pod \"nova-metadata-0\" (UID: \"d7d31ecb-9f5f-42bf-be6a-9e97c594247a\") " pod="openstack/nova-metadata-0" Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.923860 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.943979 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.955302 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.956992 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.959682 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.960169 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.960169 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.966029 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 18 00:51:21 crc kubenswrapper[4847]: I0218 00:51:21.968922 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:51:22 crc kubenswrapper[4847]: I0218 00:51:22.035129 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p22n6\" (UniqueName: \"kubernetes.io/projected/6fc0b03b-36f3-47d5-bdce-65a09774bf93-kube-api-access-p22n6\") pod \"nova-api-0\" (UID: \"6fc0b03b-36f3-47d5-bdce-65a09774bf93\") " pod="openstack/nova-api-0" Feb 18 00:51:22 crc kubenswrapper[4847]: I0218 00:51:22.035194 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc0b03b-36f3-47d5-bdce-65a09774bf93-public-tls-certs\") pod \"nova-api-0\" (UID: \"6fc0b03b-36f3-47d5-bdce-65a09774bf93\") " pod="openstack/nova-api-0" Feb 18 00:51:22 crc kubenswrapper[4847]: I0218 00:51:22.035254 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc0b03b-36f3-47d5-bdce-65a09774bf93-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6fc0b03b-36f3-47d5-bdce-65a09774bf93\") " pod="openstack/nova-api-0" Feb 18 00:51:22 crc kubenswrapper[4847]: I0218 00:51:22.035334 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fc0b03b-36f3-47d5-bdce-65a09774bf93-logs\") pod \"nova-api-0\" (UID: \"6fc0b03b-36f3-47d5-bdce-65a09774bf93\") " pod="openstack/nova-api-0" Feb 18 00:51:22 crc kubenswrapper[4847]: I0218 00:51:22.035370 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fc0b03b-36f3-47d5-bdce-65a09774bf93-config-data\") pod \"nova-api-0\" (UID: 
\"6fc0b03b-36f3-47d5-bdce-65a09774bf93\") " pod="openstack/nova-api-0" Feb 18 00:51:22 crc kubenswrapper[4847]: I0218 00:51:22.035392 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc0b03b-36f3-47d5-bdce-65a09774bf93-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6fc0b03b-36f3-47d5-bdce-65a09774bf93\") " pod="openstack/nova-api-0" Feb 18 00:51:22 crc kubenswrapper[4847]: I0218 00:51:22.138055 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p22n6\" (UniqueName: \"kubernetes.io/projected/6fc0b03b-36f3-47d5-bdce-65a09774bf93-kube-api-access-p22n6\") pod \"nova-api-0\" (UID: \"6fc0b03b-36f3-47d5-bdce-65a09774bf93\") " pod="openstack/nova-api-0" Feb 18 00:51:22 crc kubenswrapper[4847]: I0218 00:51:22.139040 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc0b03b-36f3-47d5-bdce-65a09774bf93-public-tls-certs\") pod \"nova-api-0\" (UID: \"6fc0b03b-36f3-47d5-bdce-65a09774bf93\") " pod="openstack/nova-api-0" Feb 18 00:51:22 crc kubenswrapper[4847]: I0218 00:51:22.140055 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc0b03b-36f3-47d5-bdce-65a09774bf93-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6fc0b03b-36f3-47d5-bdce-65a09774bf93\") " pod="openstack/nova-api-0" Feb 18 00:51:22 crc kubenswrapper[4847]: I0218 00:51:22.140177 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fc0b03b-36f3-47d5-bdce-65a09774bf93-logs\") pod \"nova-api-0\" (UID: \"6fc0b03b-36f3-47d5-bdce-65a09774bf93\") " pod="openstack/nova-api-0" Feb 18 00:51:22 crc kubenswrapper[4847]: I0218 00:51:22.140226 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fc0b03b-36f3-47d5-bdce-65a09774bf93-config-data\") pod \"nova-api-0\" (UID: \"6fc0b03b-36f3-47d5-bdce-65a09774bf93\") " pod="openstack/nova-api-0" Feb 18 00:51:22 crc kubenswrapper[4847]: I0218 00:51:22.140247 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc0b03b-36f3-47d5-bdce-65a09774bf93-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6fc0b03b-36f3-47d5-bdce-65a09774bf93\") " pod="openstack/nova-api-0" Feb 18 00:51:22 crc kubenswrapper[4847]: I0218 00:51:22.144173 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc0b03b-36f3-47d5-bdce-65a09774bf93-public-tls-certs\") pod \"nova-api-0\" (UID: \"6fc0b03b-36f3-47d5-bdce-65a09774bf93\") " pod="openstack/nova-api-0" Feb 18 00:51:22 crc kubenswrapper[4847]: I0218 00:51:22.144433 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fc0b03b-36f3-47d5-bdce-65a09774bf93-logs\") pod \"nova-api-0\" (UID: \"6fc0b03b-36f3-47d5-bdce-65a09774bf93\") " pod="openstack/nova-api-0" Feb 18 00:51:22 crc kubenswrapper[4847]: I0218 00:51:22.147671 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fc0b03b-36f3-47d5-bdce-65a09774bf93-config-data\") pod \"nova-api-0\" (UID: \"6fc0b03b-36f3-47d5-bdce-65a09774bf93\") " pod="openstack/nova-api-0" Feb 18 00:51:22 crc kubenswrapper[4847]: I0218 00:51:22.148188 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fc0b03b-36f3-47d5-bdce-65a09774bf93-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6fc0b03b-36f3-47d5-bdce-65a09774bf93\") " pod="openstack/nova-api-0" Feb 18 00:51:22 crc kubenswrapper[4847]: I0218 00:51:22.148817 4847 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fc0b03b-36f3-47d5-bdce-65a09774bf93-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6fc0b03b-36f3-47d5-bdce-65a09774bf93\") " pod="openstack/nova-api-0" Feb 18 00:51:22 crc kubenswrapper[4847]: I0218 00:51:22.154624 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p22n6\" (UniqueName: \"kubernetes.io/projected/6fc0b03b-36f3-47d5-bdce-65a09774bf93-kube-api-access-p22n6\") pod \"nova-api-0\" (UID: \"6fc0b03b-36f3-47d5-bdce-65a09774bf93\") " pod="openstack/nova-api-0" Feb 18 00:51:22 crc kubenswrapper[4847]: I0218 00:51:22.272318 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 18 00:51:22 crc kubenswrapper[4847]: I0218 00:51:22.427920 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 18 00:51:22 crc kubenswrapper[4847]: W0218 00:51:22.438408 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7d31ecb_9f5f_42bf_be6a_9e97c594247a.slice/crio-7728d286e25d6b47d83c5999597440053755490749e65d2dd60726c397de5c29 WatchSource:0}: Error finding container 7728d286e25d6b47d83c5999597440053755490749e65d2dd60726c397de5c29: Status 404 returned error can't find the container with id 7728d286e25d6b47d83c5999597440053755490749e65d2dd60726c397de5c29 Feb 18 00:51:22 crc kubenswrapper[4847]: W0218 00:51:22.796801 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fc0b03b_36f3_47d5_bdce_65a09774bf93.slice/crio-f2a93f0d84d4592d894746b4048675e4b3bbb08f5cb8f28342e2c695403a9a41 WatchSource:0}: Error finding container f2a93f0d84d4592d894746b4048675e4b3bbb08f5cb8f28342e2c695403a9a41: Status 404 returned error can't find the container with id 
f2a93f0d84d4592d894746b4048675e4b3bbb08f5cb8f28342e2c695403a9a41 Feb 18 00:51:22 crc kubenswrapper[4847]: I0218 00:51:22.804473 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 18 00:51:23 crc kubenswrapper[4847]: I0218 00:51:23.329001 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d7d31ecb-9f5f-42bf-be6a-9e97c594247a","Type":"ContainerStarted","Data":"ec3e706a26c8f7f5927b5088bc7cad0c489004a85df7f05699e1af7b8833c71e"} Feb 18 00:51:23 crc kubenswrapper[4847]: I0218 00:51:23.329385 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d7d31ecb-9f5f-42bf-be6a-9e97c594247a","Type":"ContainerStarted","Data":"f98cf16a966d3f245614e3b415274a45795e917fc580604f7b3a5d4d22795df4"} Feb 18 00:51:23 crc kubenswrapper[4847]: I0218 00:51:23.329396 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d7d31ecb-9f5f-42bf-be6a-9e97c594247a","Type":"ContainerStarted","Data":"7728d286e25d6b47d83c5999597440053755490749e65d2dd60726c397de5c29"} Feb 18 00:51:23 crc kubenswrapper[4847]: I0218 00:51:23.334621 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6fc0b03b-36f3-47d5-bdce-65a09774bf93","Type":"ContainerStarted","Data":"a9a0d7815e52ff416ece87ba9363a0303f4bdec1ff715f0bad3c3bd014721af2"} Feb 18 00:51:23 crc kubenswrapper[4847]: I0218 00:51:23.334735 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6fc0b03b-36f3-47d5-bdce-65a09774bf93","Type":"ContainerStarted","Data":"349ffbe6a7fefaaa0d9fb91029d4c6f637cede29ab477bd74a6b44ad2e1e7821"} Feb 18 00:51:23 crc kubenswrapper[4847]: I0218 00:51:23.334785 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"6fc0b03b-36f3-47d5-bdce-65a09774bf93","Type":"ContainerStarted","Data":"f2a93f0d84d4592d894746b4048675e4b3bbb08f5cb8f28342e2c695403a9a41"} Feb 18 00:51:23 crc kubenswrapper[4847]: I0218 00:51:23.377901 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.377880601 podStartE2EDuration="2.377880601s" podCreationTimestamp="2026-02-18 00:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:51:23.375659598 +0000 UTC m=+1556.753010550" watchObservedRunningTime="2026-02-18 00:51:23.377880601 +0000 UTC m=+1556.755231553" Feb 18 00:51:23 crc kubenswrapper[4847]: I0218 00:51:23.389822 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.3898009780000002 podStartE2EDuration="2.389800978s" podCreationTimestamp="2026-02-18 00:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:51:23.354324793 +0000 UTC m=+1556.731675745" watchObservedRunningTime="2026-02-18 00:51:23.389800978 +0000 UTC m=+1556.767151930" Feb 18 00:51:23 crc kubenswrapper[4847]: I0218 00:51:23.418434 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9efc747e-2d6f-4489-a0ed-aca538e54574" path="/var/lib/kubelet/pods/9efc747e-2d6f-4489-a0ed-aca538e54574/volumes" Feb 18 00:51:23 crc kubenswrapper[4847]: I0218 00:51:23.686417 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.283802 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.371756 4847 generic.go:334] "Generic (PLEG): container finished" podID="0c32cae7-3099-475a-b844-0c4b66a5f4ff" containerID="6fd80562dfe52dec8e37a5e0187ce152ddbc00f3c11cf8b4fa22598e2a264800" exitCode=137 Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.371829 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.372152 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0c32cae7-3099-475a-b844-0c4b66a5f4ff","Type":"ContainerDied","Data":"6fd80562dfe52dec8e37a5e0187ce152ddbc00f3c11cf8b4fa22598e2a264800"} Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.372203 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"0c32cae7-3099-475a-b844-0c4b66a5f4ff","Type":"ContainerDied","Data":"091a063525e3d4be8e94175caa516ee092ad86a224bf7be26b8217cdae6c0254"} Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.372228 4847 scope.go:117] "RemoveContainer" containerID="6fd80562dfe52dec8e37a5e0187ce152ddbc00f3c11cf8b4fa22598e2a264800" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.395890 4847 scope.go:117] "RemoveContainer" containerID="d4ed6f49ff08ca869c4203e4430dcca48f111ea620d4d62fb1c5a0a968389b88" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.407149 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c32cae7-3099-475a-b844-0c4b66a5f4ff-scripts\") pod \"0c32cae7-3099-475a-b844-0c4b66a5f4ff\" (UID: \"0c32cae7-3099-475a-b844-0c4b66a5f4ff\") " Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.407306 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0c32cae7-3099-475a-b844-0c4b66a5f4ff-combined-ca-bundle\") pod \"0c32cae7-3099-475a-b844-0c4b66a5f4ff\" (UID: \"0c32cae7-3099-475a-b844-0c4b66a5f4ff\") " Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.407388 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c32cae7-3099-475a-b844-0c4b66a5f4ff-config-data\") pod \"0c32cae7-3099-475a-b844-0c4b66a5f4ff\" (UID: \"0c32cae7-3099-475a-b844-0c4b66a5f4ff\") " Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.407470 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sffxs\" (UniqueName: \"kubernetes.io/projected/0c32cae7-3099-475a-b844-0c4b66a5f4ff-kube-api-access-sffxs\") pod \"0c32cae7-3099-475a-b844-0c4b66a5f4ff\" (UID: \"0c32cae7-3099-475a-b844-0c4b66a5f4ff\") " Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.413788 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c32cae7-3099-475a-b844-0c4b66a5f4ff-scripts" (OuterVolumeSpecName: "scripts") pod "0c32cae7-3099-475a-b844-0c4b66a5f4ff" (UID: "0c32cae7-3099-475a-b844-0c4b66a5f4ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.413999 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c32cae7-3099-475a-b844-0c4b66a5f4ff-kube-api-access-sffxs" (OuterVolumeSpecName: "kube-api-access-sffxs") pod "0c32cae7-3099-475a-b844-0c4b66a5f4ff" (UID: "0c32cae7-3099-475a-b844-0c4b66a5f4ff"). InnerVolumeSpecName "kube-api-access-sffxs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.426792 4847 scope.go:117] "RemoveContainer" containerID="1e495f030bc4a6bda4b45b0cbc5b919c2ab1f241f71cdd5cd50b2b2b26bd9aa5" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.511429 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c32cae7-3099-475a-b844-0c4b66a5f4ff-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.511456 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sffxs\" (UniqueName: \"kubernetes.io/projected/0c32cae7-3099-475a-b844-0c4b66a5f4ff-kube-api-access-sffxs\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.518545 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c32cae7-3099-475a-b844-0c4b66a5f4ff-config-data" (OuterVolumeSpecName: "config-data") pod "0c32cae7-3099-475a-b844-0c4b66a5f4ff" (UID: "0c32cae7-3099-475a-b844-0c4b66a5f4ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.549254 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c32cae7-3099-475a-b844-0c4b66a5f4ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c32cae7-3099-475a-b844-0c4b66a5f4ff" (UID: "0c32cae7-3099-475a-b844-0c4b66a5f4ff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.610791 4847 scope.go:117] "RemoveContainer" containerID="2534f5fc438c559c89f6d6d08e223a52da853bb5eaf6ad57aebbda87c342e31f" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.612980 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c32cae7-3099-475a-b844-0c4b66a5f4ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.613019 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c32cae7-3099-475a-b844-0c4b66a5f4ff-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.632426 4847 scope.go:117] "RemoveContainer" containerID="6fd80562dfe52dec8e37a5e0187ce152ddbc00f3c11cf8b4fa22598e2a264800" Feb 18 00:51:24 crc kubenswrapper[4847]: E0218 00:51:24.632864 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fd80562dfe52dec8e37a5e0187ce152ddbc00f3c11cf8b4fa22598e2a264800\": container with ID starting with 6fd80562dfe52dec8e37a5e0187ce152ddbc00f3c11cf8b4fa22598e2a264800 not found: ID does not exist" containerID="6fd80562dfe52dec8e37a5e0187ce152ddbc00f3c11cf8b4fa22598e2a264800" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.632917 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fd80562dfe52dec8e37a5e0187ce152ddbc00f3c11cf8b4fa22598e2a264800"} err="failed to get container status \"6fd80562dfe52dec8e37a5e0187ce152ddbc00f3c11cf8b4fa22598e2a264800\": rpc error: code = NotFound desc = could not find container \"6fd80562dfe52dec8e37a5e0187ce152ddbc00f3c11cf8b4fa22598e2a264800\": container with ID starting with 6fd80562dfe52dec8e37a5e0187ce152ddbc00f3c11cf8b4fa22598e2a264800 not found: ID does not 
exist" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.632948 4847 scope.go:117] "RemoveContainer" containerID="d4ed6f49ff08ca869c4203e4430dcca48f111ea620d4d62fb1c5a0a968389b88" Feb 18 00:51:24 crc kubenswrapper[4847]: E0218 00:51:24.633218 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4ed6f49ff08ca869c4203e4430dcca48f111ea620d4d62fb1c5a0a968389b88\": container with ID starting with d4ed6f49ff08ca869c4203e4430dcca48f111ea620d4d62fb1c5a0a968389b88 not found: ID does not exist" containerID="d4ed6f49ff08ca869c4203e4430dcca48f111ea620d4d62fb1c5a0a968389b88" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.633240 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4ed6f49ff08ca869c4203e4430dcca48f111ea620d4d62fb1c5a0a968389b88"} err="failed to get container status \"d4ed6f49ff08ca869c4203e4430dcca48f111ea620d4d62fb1c5a0a968389b88\": rpc error: code = NotFound desc = could not find container \"d4ed6f49ff08ca869c4203e4430dcca48f111ea620d4d62fb1c5a0a968389b88\": container with ID starting with d4ed6f49ff08ca869c4203e4430dcca48f111ea620d4d62fb1c5a0a968389b88 not found: ID does not exist" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.633269 4847 scope.go:117] "RemoveContainer" containerID="1e495f030bc4a6bda4b45b0cbc5b919c2ab1f241f71cdd5cd50b2b2b26bd9aa5" Feb 18 00:51:24 crc kubenswrapper[4847]: E0218 00:51:24.633839 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e495f030bc4a6bda4b45b0cbc5b919c2ab1f241f71cdd5cd50b2b2b26bd9aa5\": container with ID starting with 1e495f030bc4a6bda4b45b0cbc5b919c2ab1f241f71cdd5cd50b2b2b26bd9aa5 not found: ID does not exist" containerID="1e495f030bc4a6bda4b45b0cbc5b919c2ab1f241f71cdd5cd50b2b2b26bd9aa5" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.633855 4847 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e495f030bc4a6bda4b45b0cbc5b919c2ab1f241f71cdd5cd50b2b2b26bd9aa5"} err="failed to get container status \"1e495f030bc4a6bda4b45b0cbc5b919c2ab1f241f71cdd5cd50b2b2b26bd9aa5\": rpc error: code = NotFound desc = could not find container \"1e495f030bc4a6bda4b45b0cbc5b919c2ab1f241f71cdd5cd50b2b2b26bd9aa5\": container with ID starting with 1e495f030bc4a6bda4b45b0cbc5b919c2ab1f241f71cdd5cd50b2b2b26bd9aa5 not found: ID does not exist" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.633869 4847 scope.go:117] "RemoveContainer" containerID="2534f5fc438c559c89f6d6d08e223a52da853bb5eaf6ad57aebbda87c342e31f" Feb 18 00:51:24 crc kubenswrapper[4847]: E0218 00:51:24.634180 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2534f5fc438c559c89f6d6d08e223a52da853bb5eaf6ad57aebbda87c342e31f\": container with ID starting with 2534f5fc438c559c89f6d6d08e223a52da853bb5eaf6ad57aebbda87c342e31f not found: ID does not exist" containerID="2534f5fc438c559c89f6d6d08e223a52da853bb5eaf6ad57aebbda87c342e31f" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.634226 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2534f5fc438c559c89f6d6d08e223a52da853bb5eaf6ad57aebbda87c342e31f"} err="failed to get container status \"2534f5fc438c559c89f6d6d08e223a52da853bb5eaf6ad57aebbda87c342e31f\": rpc error: code = NotFound desc = could not find container \"2534f5fc438c559c89f6d6d08e223a52da853bb5eaf6ad57aebbda87c342e31f\": container with ID starting with 2534f5fc438c559c89f6d6d08e223a52da853bb5eaf6ad57aebbda87c342e31f not found: ID does not exist" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.713221 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.720881 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/aodh-0"] Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.736435 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 18 00:51:24 crc kubenswrapper[4847]: E0218 00:51:24.736898 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c32cae7-3099-475a-b844-0c4b66a5f4ff" containerName="aodh-notifier" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.736916 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c32cae7-3099-475a-b844-0c4b66a5f4ff" containerName="aodh-notifier" Feb 18 00:51:24 crc kubenswrapper[4847]: E0218 00:51:24.736930 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c32cae7-3099-475a-b844-0c4b66a5f4ff" containerName="aodh-api" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.736937 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c32cae7-3099-475a-b844-0c4b66a5f4ff" containerName="aodh-api" Feb 18 00:51:24 crc kubenswrapper[4847]: E0218 00:51:24.736956 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c32cae7-3099-475a-b844-0c4b66a5f4ff" containerName="aodh-listener" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.736962 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c32cae7-3099-475a-b844-0c4b66a5f4ff" containerName="aodh-listener" Feb 18 00:51:24 crc kubenswrapper[4847]: E0218 00:51:24.736988 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c32cae7-3099-475a-b844-0c4b66a5f4ff" containerName="aodh-evaluator" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.736994 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c32cae7-3099-475a-b844-0c4b66a5f4ff" containerName="aodh-evaluator" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.737204 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c32cae7-3099-475a-b844-0c4b66a5f4ff" containerName="aodh-listener" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.737222 4847 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0c32cae7-3099-475a-b844-0c4b66a5f4ff" containerName="aodh-api" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.737231 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c32cae7-3099-475a-b844-0c4b66a5f4ff" containerName="aodh-evaluator" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.737248 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c32cae7-3099-475a-b844-0c4b66a5f4ff" containerName="aodh-notifier" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.739223 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.741649 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.741959 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.742298 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-9sw76" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.743161 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.744592 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.757455 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.816385 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46hjc\" (UniqueName: \"kubernetes.io/projected/312a9ba8-6259-4db9-b9e3-9d6b7912c6ba-kube-api-access-46hjc\") pod \"aodh-0\" (UID: 
\"312a9ba8-6259-4db9-b9e3-9d6b7912c6ba\") " pod="openstack/aodh-0" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.816447 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/312a9ba8-6259-4db9-b9e3-9d6b7912c6ba-internal-tls-certs\") pod \"aodh-0\" (UID: \"312a9ba8-6259-4db9-b9e3-9d6b7912c6ba\") " pod="openstack/aodh-0" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.816553 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312a9ba8-6259-4db9-b9e3-9d6b7912c6ba-config-data\") pod \"aodh-0\" (UID: \"312a9ba8-6259-4db9-b9e3-9d6b7912c6ba\") " pod="openstack/aodh-0" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.816821 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/312a9ba8-6259-4db9-b9e3-9d6b7912c6ba-scripts\") pod \"aodh-0\" (UID: \"312a9ba8-6259-4db9-b9e3-9d6b7912c6ba\") " pod="openstack/aodh-0" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.816989 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312a9ba8-6259-4db9-b9e3-9d6b7912c6ba-combined-ca-bundle\") pod \"aodh-0\" (UID: \"312a9ba8-6259-4db9-b9e3-9d6b7912c6ba\") " pod="openstack/aodh-0" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.817019 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/312a9ba8-6259-4db9-b9e3-9d6b7912c6ba-public-tls-certs\") pod \"aodh-0\" (UID: \"312a9ba8-6259-4db9-b9e3-9d6b7912c6ba\") " pod="openstack/aodh-0" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.919236 4847 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312a9ba8-6259-4db9-b9e3-9d6b7912c6ba-config-data\") pod \"aodh-0\" (UID: \"312a9ba8-6259-4db9-b9e3-9d6b7912c6ba\") " pod="openstack/aodh-0" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.919347 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/312a9ba8-6259-4db9-b9e3-9d6b7912c6ba-scripts\") pod \"aodh-0\" (UID: \"312a9ba8-6259-4db9-b9e3-9d6b7912c6ba\") " pod="openstack/aodh-0" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.919404 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312a9ba8-6259-4db9-b9e3-9d6b7912c6ba-combined-ca-bundle\") pod \"aodh-0\" (UID: \"312a9ba8-6259-4db9-b9e3-9d6b7912c6ba\") " pod="openstack/aodh-0" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.919424 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/312a9ba8-6259-4db9-b9e3-9d6b7912c6ba-public-tls-certs\") pod \"aodh-0\" (UID: \"312a9ba8-6259-4db9-b9e3-9d6b7912c6ba\") " pod="openstack/aodh-0" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.919463 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46hjc\" (UniqueName: \"kubernetes.io/projected/312a9ba8-6259-4db9-b9e3-9d6b7912c6ba-kube-api-access-46hjc\") pod \"aodh-0\" (UID: \"312a9ba8-6259-4db9-b9e3-9d6b7912c6ba\") " pod="openstack/aodh-0" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.919485 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/312a9ba8-6259-4db9-b9e3-9d6b7912c6ba-internal-tls-certs\") pod \"aodh-0\" (UID: \"312a9ba8-6259-4db9-b9e3-9d6b7912c6ba\") " pod="openstack/aodh-0" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 
00:51:24.932299 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/312a9ba8-6259-4db9-b9e3-9d6b7912c6ba-scripts\") pod \"aodh-0\" (UID: \"312a9ba8-6259-4db9-b9e3-9d6b7912c6ba\") " pod="openstack/aodh-0" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.932307 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/312a9ba8-6259-4db9-b9e3-9d6b7912c6ba-public-tls-certs\") pod \"aodh-0\" (UID: \"312a9ba8-6259-4db9-b9e3-9d6b7912c6ba\") " pod="openstack/aodh-0" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.932371 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312a9ba8-6259-4db9-b9e3-9d6b7912c6ba-combined-ca-bundle\") pod \"aodh-0\" (UID: \"312a9ba8-6259-4db9-b9e3-9d6b7912c6ba\") " pod="openstack/aodh-0" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.932803 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/312a9ba8-6259-4db9-b9e3-9d6b7912c6ba-internal-tls-certs\") pod \"aodh-0\" (UID: \"312a9ba8-6259-4db9-b9e3-9d6b7912c6ba\") " pod="openstack/aodh-0" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.933347 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312a9ba8-6259-4db9-b9e3-9d6b7912c6ba-config-data\") pod \"aodh-0\" (UID: \"312a9ba8-6259-4db9-b9e3-9d6b7912c6ba\") " pod="openstack/aodh-0" Feb 18 00:51:24 crc kubenswrapper[4847]: I0218 00:51:24.937564 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46hjc\" (UniqueName: \"kubernetes.io/projected/312a9ba8-6259-4db9-b9e3-9d6b7912c6ba-kube-api-access-46hjc\") pod \"aodh-0\" (UID: \"312a9ba8-6259-4db9-b9e3-9d6b7912c6ba\") " pod="openstack/aodh-0" Feb 18 00:51:25 crc 
kubenswrapper[4847]: I0218 00:51:25.056333 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 18 00:51:25 crc kubenswrapper[4847]: I0218 00:51:25.404691 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:51:25 crc kubenswrapper[4847]: E0218 00:51:25.405995 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:51:25 crc kubenswrapper[4847]: I0218 00:51:25.418452 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c32cae7-3099-475a-b844-0c4b66a5f4ff" path="/var/lib/kubelet/pods/0c32cae7-3099-475a-b844-0c4b66a5f4ff/volumes" Feb 18 00:51:25 crc kubenswrapper[4847]: I0218 00:51:25.605272 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 18 00:51:25 crc kubenswrapper[4847]: W0218 00:51:25.610931 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod312a9ba8_6259_4db9_b9e3_9d6b7912c6ba.slice/crio-4cb2f7208a686400105b2ce2e023a5033e3ecc6ced9a69a11a716331b4765225 WatchSource:0}: Error finding container 4cb2f7208a686400105b2ce2e023a5033e3ecc6ced9a69a11a716331b4765225: Status 404 returned error can't find the container with id 4cb2f7208a686400105b2ce2e023a5033e3ecc6ced9a69a11a716331b4765225 Feb 18 00:51:26 crc kubenswrapper[4847]: I0218 00:51:26.399495 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"312a9ba8-6259-4db9-b9e3-9d6b7912c6ba","Type":"ContainerStarted","Data":"0cfef401aeb477967dac0b2b7fe6a25df99a140baaba99ae472e61966b201164"} Feb 18 00:51:26 crc kubenswrapper[4847]: I0218 00:51:26.399951 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"312a9ba8-6259-4db9-b9e3-9d6b7912c6ba","Type":"ContainerStarted","Data":"4cb2f7208a686400105b2ce2e023a5033e3ecc6ced9a69a11a716331b4765225"} Feb 18 00:51:26 crc kubenswrapper[4847]: I0218 00:51:26.966663 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 00:51:26 crc kubenswrapper[4847]: I0218 00:51:26.967067 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 18 00:51:27 crc kubenswrapper[4847]: I0218 00:51:27.437791 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"312a9ba8-6259-4db9-b9e3-9d6b7912c6ba","Type":"ContainerStarted","Data":"96de11cbed5e033d812f6d628187b0151f8ea4bf859244df26621599a3b30007"} Feb 18 00:51:28 crc kubenswrapper[4847]: I0218 00:51:28.440351 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"312a9ba8-6259-4db9-b9e3-9d6b7912c6ba","Type":"ContainerStarted","Data":"56bfb1f8e20cd37da923c83f8dbbded8f08b2382c926958defc79ce1a3d3d73c"} Feb 18 00:51:28 crc kubenswrapper[4847]: I0218 00:51:28.686384 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 18 00:51:28 crc kubenswrapper[4847]: I0218 00:51:28.720011 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 18 00:51:29 crc kubenswrapper[4847]: I0218 00:51:29.454741 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"312a9ba8-6259-4db9-b9e3-9d6b7912c6ba","Type":"ContainerStarted","Data":"255d2b91edebae49c4cbeaf6e6de43025af49d3c6477ac0a2fbe7e556c7e5801"} Feb 
18 00:51:29 crc kubenswrapper[4847]: I0218 00:51:29.502144 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.665696627 podStartE2EDuration="5.502113133s" podCreationTimestamp="2026-02-18 00:51:24 +0000 UTC" firstStartedPulling="2026-02-18 00:51:25.614481691 +0000 UTC m=+1558.991832623" lastFinishedPulling="2026-02-18 00:51:28.450898187 +0000 UTC m=+1561.828249129" observedRunningTime="2026-02-18 00:51:29.47956741 +0000 UTC m=+1562.856918442" watchObservedRunningTime="2026-02-18 00:51:29.502113133 +0000 UTC m=+1562.879464095" Feb 18 00:51:29 crc kubenswrapper[4847]: I0218 00:51:29.578935 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 18 00:51:31 crc kubenswrapper[4847]: I0218 00:51:31.966442 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 00:51:31 crc kubenswrapper[4847]: I0218 00:51:31.966842 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 18 00:51:32 crc kubenswrapper[4847]: I0218 00:51:32.272731 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 00:51:32 crc kubenswrapper[4847]: I0218 00:51:32.272794 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 18 00:51:32 crc kubenswrapper[4847]: I0218 00:51:32.994870 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d7d31ecb-9f5f-42bf-be6a-9e97c594247a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.247:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 00:51:32 crc kubenswrapper[4847]: I0218 00:51:32.994878 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="d7d31ecb-9f5f-42bf-be6a-9e97c594247a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.247:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 00:51:33 crc kubenswrapper[4847]: I0218 00:51:33.286769 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6fc0b03b-36f3-47d5-bdce-65a09774bf93" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.248:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 00:51:33 crc kubenswrapper[4847]: I0218 00:51:33.286787 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6fc0b03b-36f3-47d5-bdce-65a09774bf93" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.248:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 18 00:51:37 crc kubenswrapper[4847]: I0218 00:51:37.414743 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:51:37 crc kubenswrapper[4847]: E0218 00:51:37.415948 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:51:41 crc kubenswrapper[4847]: I0218 00:51:41.978079 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 00:51:41 crc kubenswrapper[4847]: I0218 00:51:41.980576 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 18 00:51:41 crc kubenswrapper[4847]: 
I0218 00:51:41.987492 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 00:51:41 crc kubenswrapper[4847]: I0218 00:51:41.990708 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 18 00:51:42 crc kubenswrapper[4847]: I0218 00:51:42.282813 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 00:51:42 crc kubenswrapper[4847]: I0218 00:51:42.283474 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 00:51:42 crc kubenswrapper[4847]: I0218 00:51:42.284165 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 18 00:51:42 crc kubenswrapper[4847]: I0218 00:51:42.292824 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 00:51:42 crc kubenswrapper[4847]: I0218 00:51:42.674459 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 18 00:51:42 crc kubenswrapper[4847]: I0218 00:51:42.685874 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 18 00:51:43 crc kubenswrapper[4847]: I0218 00:51:43.518672 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 00:51:51 crc kubenswrapper[4847]: I0218 00:51:51.404801 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:51:51 crc kubenswrapper[4847]: E0218 00:51:51.406022 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:51:55 crc kubenswrapper[4847]: I0218 00:51:55.379150 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-znxsz"] Feb 18 00:51:55 crc kubenswrapper[4847]: I0218 00:51:55.391772 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-znxsz"] Feb 18 00:51:55 crc kubenswrapper[4847]: I0218 00:51:55.422051 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="014e96ac-8dcb-4d73-a9e1-1ade26742005" path="/var/lib/kubelet/pods/014e96ac-8dcb-4d73-a9e1-1ade26742005/volumes" Feb 18 00:51:55 crc kubenswrapper[4847]: I0218 00:51:55.511298 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-k4t5r"] Feb 18 00:51:55 crc kubenswrapper[4847]: I0218 00:51:55.513937 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-k4t5r" Feb 18 00:51:55 crc kubenswrapper[4847]: I0218 00:51:55.543938 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-k4t5r"] Feb 18 00:51:55 crc kubenswrapper[4847]: I0218 00:51:55.616858 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/452f74c1-fa5f-464b-9943-a4a1c2d5c48a-combined-ca-bundle\") pod \"heat-db-sync-k4t5r\" (UID: \"452f74c1-fa5f-464b-9943-a4a1c2d5c48a\") " pod="openstack/heat-db-sync-k4t5r" Feb 18 00:51:55 crc kubenswrapper[4847]: I0218 00:51:55.616937 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/452f74c1-fa5f-464b-9943-a4a1c2d5c48a-config-data\") pod \"heat-db-sync-k4t5r\" (UID: \"452f74c1-fa5f-464b-9943-a4a1c2d5c48a\") " pod="openstack/heat-db-sync-k4t5r" Feb 18 00:51:55 crc kubenswrapper[4847]: I0218 00:51:55.616959 4847 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjwt2\" (UniqueName: \"kubernetes.io/projected/452f74c1-fa5f-464b-9943-a4a1c2d5c48a-kube-api-access-vjwt2\") pod \"heat-db-sync-k4t5r\" (UID: \"452f74c1-fa5f-464b-9943-a4a1c2d5c48a\") " pod="openstack/heat-db-sync-k4t5r" Feb 18 00:51:55 crc kubenswrapper[4847]: I0218 00:51:55.718851 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/452f74c1-fa5f-464b-9943-a4a1c2d5c48a-combined-ca-bundle\") pod \"heat-db-sync-k4t5r\" (UID: \"452f74c1-fa5f-464b-9943-a4a1c2d5c48a\") " pod="openstack/heat-db-sync-k4t5r" Feb 18 00:51:55 crc kubenswrapper[4847]: I0218 00:51:55.718963 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/452f74c1-fa5f-464b-9943-a4a1c2d5c48a-config-data\") pod \"heat-db-sync-k4t5r\" (UID: \"452f74c1-fa5f-464b-9943-a4a1c2d5c48a\") " pod="openstack/heat-db-sync-k4t5r" Feb 18 00:51:55 crc kubenswrapper[4847]: I0218 00:51:55.719001 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjwt2\" (UniqueName: \"kubernetes.io/projected/452f74c1-fa5f-464b-9943-a4a1c2d5c48a-kube-api-access-vjwt2\") pod \"heat-db-sync-k4t5r\" (UID: \"452f74c1-fa5f-464b-9943-a4a1c2d5c48a\") " pod="openstack/heat-db-sync-k4t5r" Feb 18 00:51:55 crc kubenswrapper[4847]: I0218 00:51:55.744683 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/452f74c1-fa5f-464b-9943-a4a1c2d5c48a-config-data\") pod \"heat-db-sync-k4t5r\" (UID: \"452f74c1-fa5f-464b-9943-a4a1c2d5c48a\") " pod="openstack/heat-db-sync-k4t5r" Feb 18 00:51:55 crc kubenswrapper[4847]: I0218 00:51:55.745390 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/452f74c1-fa5f-464b-9943-a4a1c2d5c48a-combined-ca-bundle\") pod \"heat-db-sync-k4t5r\" (UID: \"452f74c1-fa5f-464b-9943-a4a1c2d5c48a\") " pod="openstack/heat-db-sync-k4t5r" Feb 18 00:51:55 crc kubenswrapper[4847]: I0218 00:51:55.748197 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjwt2\" (UniqueName: \"kubernetes.io/projected/452f74c1-fa5f-464b-9943-a4a1c2d5c48a-kube-api-access-vjwt2\") pod \"heat-db-sync-k4t5r\" (UID: \"452f74c1-fa5f-464b-9943-a4a1c2d5c48a\") " pod="openstack/heat-db-sync-k4t5r" Feb 18 00:51:55 crc kubenswrapper[4847]: I0218 00:51:55.849405 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-k4t5r" Feb 18 00:51:56 crc kubenswrapper[4847]: I0218 00:51:56.329415 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-k4t5r"] Feb 18 00:51:56 crc kubenswrapper[4847]: E0218 00:51:56.443218 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 00:51:56 crc kubenswrapper[4847]: E0218 00:51:56.443272 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 00:51:56 crc kubenswrapper[4847]: E0218 00:51:56.443391 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjwt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-k4t5r_openstack(452f74c1-fa5f-464b-9943-a4a1c2d5c48a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 00:51:56 crc kubenswrapper[4847]: E0218 00:51:56.444696 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:51:56 crc kubenswrapper[4847]: I0218 00:51:56.854548 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-k4t5r" event={"ID":"452f74c1-fa5f-464b-9943-a4a1c2d5c48a","Type":"ContainerStarted","Data":"529edc4fd407ebe15713e5e5c4242fbe022ebf392cafa6cee6ccce30f329559f"} Feb 18 00:51:56 crc kubenswrapper[4847]: E0218 00:51:56.858092 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:51:57 crc kubenswrapper[4847]: I0218 00:51:57.594844 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 00:51:57 crc kubenswrapper[4847]: I0218 00:51:57.705124 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:51:57 crc kubenswrapper[4847]: I0218 00:51:57.705709 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="108d0d51-c527-4d5b-8129-0e0df3e355c2" containerName="ceilometer-central-agent" containerID="cri-o://3ea2820168a8de51c02a4f24b4add952ccb5457d1e7772e6e8c533a559ebc60b" gracePeriod=30 Feb 18 00:51:57 crc kubenswrapper[4847]: I0218 00:51:57.706187 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="108d0d51-c527-4d5b-8129-0e0df3e355c2" containerName="proxy-httpd" containerID="cri-o://3762d3bdfb43664204c4ac87da22ae93f428cd087f160ef6a0509417461d225f" gracePeriod=30 Feb 18 00:51:57 crc kubenswrapper[4847]: I0218 00:51:57.706237 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="108d0d51-c527-4d5b-8129-0e0df3e355c2" containerName="sg-core" containerID="cri-o://bf16dec9289475e1b58924b3fcb59776a5f8705c970fce17296c1e5a82cdd2c5" gracePeriod=30 Feb 18 00:51:57 crc kubenswrapper[4847]: I0218 00:51:57.706274 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="108d0d51-c527-4d5b-8129-0e0df3e355c2" containerName="ceilometer-notification-agent" containerID="cri-o://d348f31e549a45daa9b07e9273e5b941de4bacfc801b3d6868e71d7edeffa6af" gracePeriod=30 Feb 18 00:51:57 crc kubenswrapper[4847]: I0218 00:51:57.866558 4847 generic.go:334] "Generic (PLEG): container finished" podID="108d0d51-c527-4d5b-8129-0e0df3e355c2" containerID="bf16dec9289475e1b58924b3fcb59776a5f8705c970fce17296c1e5a82cdd2c5" exitCode=2 Feb 18 00:51:57 crc kubenswrapper[4847]: I0218 00:51:57.866632 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"108d0d51-c527-4d5b-8129-0e0df3e355c2","Type":"ContainerDied","Data":"bf16dec9289475e1b58924b3fcb59776a5f8705c970fce17296c1e5a82cdd2c5"} Feb 18 00:51:57 crc kubenswrapper[4847]: E0218 00:51:57.868182 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:51:58 crc kubenswrapper[4847]: I0218 00:51:58.704795 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 00:51:58 crc kubenswrapper[4847]: I0218 00:51:58.877763 4847 generic.go:334] "Generic (PLEG): container finished" podID="108d0d51-c527-4d5b-8129-0e0df3e355c2" containerID="3762d3bdfb43664204c4ac87da22ae93f428cd087f160ef6a0509417461d225f" exitCode=0 Feb 18 00:51:58 crc kubenswrapper[4847]: I0218 00:51:58.877796 4847 generic.go:334] "Generic (PLEG): 
container finished" podID="108d0d51-c527-4d5b-8129-0e0df3e355c2" containerID="d348f31e549a45daa9b07e9273e5b941de4bacfc801b3d6868e71d7edeffa6af" exitCode=0 Feb 18 00:51:58 crc kubenswrapper[4847]: I0218 00:51:58.877805 4847 generic.go:334] "Generic (PLEG): container finished" podID="108d0d51-c527-4d5b-8129-0e0df3e355c2" containerID="3ea2820168a8de51c02a4f24b4add952ccb5457d1e7772e6e8c533a559ebc60b" exitCode=0 Feb 18 00:51:58 crc kubenswrapper[4847]: I0218 00:51:58.877820 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"108d0d51-c527-4d5b-8129-0e0df3e355c2","Type":"ContainerDied","Data":"3762d3bdfb43664204c4ac87da22ae93f428cd087f160ef6a0509417461d225f"} Feb 18 00:51:58 crc kubenswrapper[4847]: I0218 00:51:58.877879 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"108d0d51-c527-4d5b-8129-0e0df3e355c2","Type":"ContainerDied","Data":"d348f31e549a45daa9b07e9273e5b941de4bacfc801b3d6868e71d7edeffa6af"} Feb 18 00:51:58 crc kubenswrapper[4847]: I0218 00:51:58.877897 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"108d0d51-c527-4d5b-8129-0e0df3e355c2","Type":"ContainerDied","Data":"3ea2820168a8de51c02a4f24b4add952ccb5457d1e7772e6e8c533a559ebc60b"} Feb 18 00:51:58 crc kubenswrapper[4847]: I0218 00:51:58.877910 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"108d0d51-c527-4d5b-8129-0e0df3e355c2","Type":"ContainerDied","Data":"fdb101c592f7e794c10891a40331106f88f2c21989141402800e5f788321d976"} Feb 18 00:51:58 crc kubenswrapper[4847]: I0218 00:51:58.877921 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdb101c592f7e794c10891a40331106f88f2c21989141402800e5f788321d976" Feb 18 00:51:58 crc kubenswrapper[4847]: I0218 00:51:58.897485 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.094205 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq2qp\" (UniqueName: \"kubernetes.io/projected/108d0d51-c527-4d5b-8129-0e0df3e355c2-kube-api-access-bq2qp\") pod \"108d0d51-c527-4d5b-8129-0e0df3e355c2\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.094289 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-sg-core-conf-yaml\") pod \"108d0d51-c527-4d5b-8129-0e0df3e355c2\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.094318 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/108d0d51-c527-4d5b-8129-0e0df3e355c2-run-httpd\") pod \"108d0d51-c527-4d5b-8129-0e0df3e355c2\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.094345 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/108d0d51-c527-4d5b-8129-0e0df3e355c2-log-httpd\") pod \"108d0d51-c527-4d5b-8129-0e0df3e355c2\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.094362 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-combined-ca-bundle\") pod \"108d0d51-c527-4d5b-8129-0e0df3e355c2\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.094414 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-scripts\") pod \"108d0d51-c527-4d5b-8129-0e0df3e355c2\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.094439 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-ceilometer-tls-certs\") pod \"108d0d51-c527-4d5b-8129-0e0df3e355c2\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.094522 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-config-data\") pod \"108d0d51-c527-4d5b-8129-0e0df3e355c2\" (UID: \"108d0d51-c527-4d5b-8129-0e0df3e355c2\") " Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.096413 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/108d0d51-c527-4d5b-8129-0e0df3e355c2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "108d0d51-c527-4d5b-8129-0e0df3e355c2" (UID: "108d0d51-c527-4d5b-8129-0e0df3e355c2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.097120 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/108d0d51-c527-4d5b-8129-0e0df3e355c2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "108d0d51-c527-4d5b-8129-0e0df3e355c2" (UID: "108d0d51-c527-4d5b-8129-0e0df3e355c2"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.113790 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/108d0d51-c527-4d5b-8129-0e0df3e355c2-kube-api-access-bq2qp" (OuterVolumeSpecName: "kube-api-access-bq2qp") pod "108d0d51-c527-4d5b-8129-0e0df3e355c2" (UID: "108d0d51-c527-4d5b-8129-0e0df3e355c2"). InnerVolumeSpecName "kube-api-access-bq2qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.129109 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-scripts" (OuterVolumeSpecName: "scripts") pod "108d0d51-c527-4d5b-8129-0e0df3e355c2" (UID: "108d0d51-c527-4d5b-8129-0e0df3e355c2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.144012 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "108d0d51-c527-4d5b-8129-0e0df3e355c2" (UID: "108d0d51-c527-4d5b-8129-0e0df3e355c2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.209934 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "108d0d51-c527-4d5b-8129-0e0df3e355c2" (UID: "108d0d51-c527-4d5b-8129-0e0df3e355c2"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.211865 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq2qp\" (UniqueName: \"kubernetes.io/projected/108d0d51-c527-4d5b-8129-0e0df3e355c2-kube-api-access-bq2qp\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.211913 4847 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.211932 4847 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/108d0d51-c527-4d5b-8129-0e0df3e355c2-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.211942 4847 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/108d0d51-c527-4d5b-8129-0e0df3e355c2-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.211951 4847 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-scripts\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.211961 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.223476 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "108d0d51-c527-4d5b-8129-0e0df3e355c2" (UID: 
"108d0d51-c527-4d5b-8129-0e0df3e355c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.289295 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-config-data" (OuterVolumeSpecName: "config-data") pod "108d0d51-c527-4d5b-8129-0e0df3e355c2" (UID: "108d0d51-c527-4d5b-8129-0e0df3e355c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.314080 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.314113 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/108d0d51-c527-4d5b-8129-0e0df3e355c2-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.886427 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.915407 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.924926 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.940583 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:51:59 crc kubenswrapper[4847]: E0218 00:51:59.941065 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="108d0d51-c527-4d5b-8129-0e0df3e355c2" containerName="ceilometer-notification-agent" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.941082 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="108d0d51-c527-4d5b-8129-0e0df3e355c2" containerName="ceilometer-notification-agent" Feb 18 00:51:59 crc kubenswrapper[4847]: E0218 00:51:59.941104 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="108d0d51-c527-4d5b-8129-0e0df3e355c2" containerName="sg-core" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.941113 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="108d0d51-c527-4d5b-8129-0e0df3e355c2" containerName="sg-core" Feb 18 00:51:59 crc kubenswrapper[4847]: E0218 00:51:59.941133 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="108d0d51-c527-4d5b-8129-0e0df3e355c2" containerName="ceilometer-central-agent" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.941139 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="108d0d51-c527-4d5b-8129-0e0df3e355c2" containerName="ceilometer-central-agent" Feb 18 00:51:59 crc kubenswrapper[4847]: E0218 00:51:59.941158 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="108d0d51-c527-4d5b-8129-0e0df3e355c2" containerName="proxy-httpd" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.941164 4847 
state_mem.go:107] "Deleted CPUSet assignment" podUID="108d0d51-c527-4d5b-8129-0e0df3e355c2" containerName="proxy-httpd" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.941363 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="108d0d51-c527-4d5b-8129-0e0df3e355c2" containerName="proxy-httpd" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.941392 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="108d0d51-c527-4d5b-8129-0e0df3e355c2" containerName="ceilometer-central-agent" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.941406 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="108d0d51-c527-4d5b-8129-0e0df3e355c2" containerName="sg-core" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.941415 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="108d0d51-c527-4d5b-8129-0e0df3e355c2" containerName="ceilometer-notification-agent" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.944980 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.951690 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.951959 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.958958 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 18 00:51:59 crc kubenswrapper[4847]: I0218 00:51:59.970437 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.027984 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-run-httpd\") pod \"ceilometer-0\" (UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.028081 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.028150 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-config-data\") pod \"ceilometer-0\" (UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.028176 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.028202 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-scripts\") pod \"ceilometer-0\" (UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.028242 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.028327 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c4d4\" (UniqueName: \"kubernetes.io/projected/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-kube-api-access-6c4d4\") pod \"ceilometer-0\" (UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.028351 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-log-httpd\") pod \"ceilometer-0\" (UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.129981 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-ceilometer-tls-certs\") pod \"ceilometer-0\" 
(UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.130067 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-config-data\") pod \"ceilometer-0\" (UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.130097 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.130123 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-scripts\") pod \"ceilometer-0\" (UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.130164 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.130215 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c4d4\" (UniqueName: \"kubernetes.io/projected/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-kube-api-access-6c4d4\") pod \"ceilometer-0\" (UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.130246 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-log-httpd\") pod \"ceilometer-0\" (UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.130269 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-run-httpd\") pod \"ceilometer-0\" (UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.130738 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-run-httpd\") pod \"ceilometer-0\" (UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.131048 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-log-httpd\") pod \"ceilometer-0\" (UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.137504 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.137789 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 
00:52:00.140523 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-config-data\") pod \"ceilometer-0\" (UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.141259 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-scripts\") pod \"ceilometer-0\" (UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.146138 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.148121 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c4d4\" (UniqueName: \"kubernetes.io/projected/7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6-kube-api-access-6c4d4\") pod \"ceilometer-0\" (UID: \"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6\") " pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.260193 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.804952 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 18 00:52:00 crc kubenswrapper[4847]: I0218 00:52:00.899042 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6","Type":"ContainerStarted","Data":"7bb6976d460467195e2e751fdcbc538a4ff81723c72925487b286ee176e02426"} Feb 18 00:52:00 crc kubenswrapper[4847]: E0218 00:52:00.910258 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 00:52:00 crc kubenswrapper[4847]: E0218 00:52:00.910318 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 00:52:00 crc kubenswrapper[4847]: E0218 00:52:00.910456 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9fhbdh67dh64h568hch65dhb8h67chf7h66dhfh585h565h85h554h68ch8bh56ch677h57bhb5h544h575hd7h5f8hc7h66dh669h57bh589h56cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 18 00:52:01 crc kubenswrapper[4847]: I0218 00:52:01.418105 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="108d0d51-c527-4d5b-8129-0e0df3e355c2" path="/var/lib/kubelet/pods/108d0d51-c527-4d5b-8129-0e0df3e355c2/volumes" Feb 18 00:52:01 crc kubenswrapper[4847]: I0218 00:52:01.911134 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6","Type":"ContainerStarted","Data":"1c272ae79606767292f84aad12173f72861eb6417887de3b81bac4fc95df540a"} Feb 18 00:52:02 crc kubenswrapper[4847]: I0218 00:52:02.097946 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="1977a705-30e5-456c-8e2c-2cd05e0325e3" containerName="rabbitmq" containerID="cri-o://ad13e112489f27c6e7a5d7aa7d1ca78cb7cf9f788e81a258f04289d28ce72ece" gracePeriod=604796 Feb 18 00:52:02 crc kubenswrapper[4847]: I0218 00:52:02.404655 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:52:02 crc kubenswrapper[4847]: E0218 00:52:02.404913 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:52:02 crc kubenswrapper[4847]: I0218 00:52:02.929754 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6","Type":"ContainerStarted","Data":"bd245b7d177bafb337de59316261bc75d07923fb1752846bd914933e5c6b399b"} Feb 18 00:52:03 crc kubenswrapper[4847]: I0218 00:52:03.238196 4847 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d" containerName="rabbitmq" containerID="cri-o://fda6b12f005508eb0112f2dacc57ae455df31692e964784c806829cf8f822ff5" gracePeriod=604796 Feb 18 00:52:03 crc kubenswrapper[4847]: E0218 00:52:03.879249 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:52:03 crc kubenswrapper[4847]: I0218 00:52:03.944307 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6","Type":"ContainerStarted","Data":"ec29a975ece58c7cf0eda30d79a208399827f554576adcd50a45ee39f71fbe30"} Feb 18 00:52:03 crc kubenswrapper[4847]: I0218 00:52:03.944490 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 18 00:52:03 crc kubenswrapper[4847]: E0218 00:52:03.946423 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:52:04 crc kubenswrapper[4847]: E0218 00:52:04.957716 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:52:08 crc kubenswrapper[4847]: I0218 00:52:08.871432 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.021613 4847 generic.go:334] "Generic (PLEG): container finished" podID="1977a705-30e5-456c-8e2c-2cd05e0325e3" containerID="ad13e112489f27c6e7a5d7aa7d1ca78cb7cf9f788e81a258f04289d28ce72ece" exitCode=0 Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.021664 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1977a705-30e5-456c-8e2c-2cd05e0325e3","Type":"ContainerDied","Data":"ad13e112489f27c6e7a5d7aa7d1ca78cb7cf9f788e81a258f04289d28ce72ece"} Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.021695 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1977a705-30e5-456c-8e2c-2cd05e0325e3","Type":"ContainerDied","Data":"4ccbcecacfb9a51bcb7fb2da73c21f4c45da56444eecabb0c00d34518f0e2f18"} Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.021699 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.021715 4847 scope.go:117] "RemoveContainer" containerID="ad13e112489f27c6e7a5d7aa7d1ca78cb7cf9f788e81a258f04289d28ce72ece" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.053305 4847 scope.go:117] "RemoveContainer" containerID="d96bb8e16fe87474f7d51baf5d2ee2d7beb30a197c7d10da3871934e6475e918" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.055983 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbfpz\" (UniqueName: \"kubernetes.io/projected/1977a705-30e5-456c-8e2c-2cd05e0325e3-kube-api-access-wbfpz\") pod \"1977a705-30e5-456c-8e2c-2cd05e0325e3\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.056079 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-tls\") pod \"1977a705-30e5-456c-8e2c-2cd05e0325e3\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.056141 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1977a705-30e5-456c-8e2c-2cd05e0325e3-config-data\") pod \"1977a705-30e5-456c-8e2c-2cd05e0325e3\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.056245 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1977a705-30e5-456c-8e2c-2cd05e0325e3-erlang-cookie-secret\") pod \"1977a705-30e5-456c-8e2c-2cd05e0325e3\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.056306 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-erlang-cookie\") pod \"1977a705-30e5-456c-8e2c-2cd05e0325e3\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.056334 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"1977a705-30e5-456c-8e2c-2cd05e0325e3\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.056406 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1977a705-30e5-456c-8e2c-2cd05e0325e3-pod-info\") pod \"1977a705-30e5-456c-8e2c-2cd05e0325e3\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.056435 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1977a705-30e5-456c-8e2c-2cd05e0325e3-plugins-conf\") pod \"1977a705-30e5-456c-8e2c-2cd05e0325e3\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.056466 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-confd\") pod \"1977a705-30e5-456c-8e2c-2cd05e0325e3\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.056508 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1977a705-30e5-456c-8e2c-2cd05e0325e3-server-conf\") pod \"1977a705-30e5-456c-8e2c-2cd05e0325e3\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 
00:52:09.056558 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-plugins\") pod \"1977a705-30e5-456c-8e2c-2cd05e0325e3\" (UID: \"1977a705-30e5-456c-8e2c-2cd05e0325e3\") " Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.057211 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1977a705-30e5-456c-8e2c-2cd05e0325e3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "1977a705-30e5-456c-8e2c-2cd05e0325e3" (UID: "1977a705-30e5-456c-8e2c-2cd05e0325e3"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.057669 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "1977a705-30e5-456c-8e2c-2cd05e0325e3" (UID: "1977a705-30e5-456c-8e2c-2cd05e0325e3"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.058233 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "1977a705-30e5-456c-8e2c-2cd05e0325e3" (UID: "1977a705-30e5-456c-8e2c-2cd05e0325e3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.062241 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1977a705-30e5-456c-8e2c-2cd05e0325e3-kube-api-access-wbfpz" (OuterVolumeSpecName: "kube-api-access-wbfpz") pod "1977a705-30e5-456c-8e2c-2cd05e0325e3" (UID: "1977a705-30e5-456c-8e2c-2cd05e0325e3"). InnerVolumeSpecName "kube-api-access-wbfpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.092069 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1977a705-30e5-456c-8e2c-2cd05e0325e3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "1977a705-30e5-456c-8e2c-2cd05e0325e3" (UID: "1977a705-30e5-456c-8e2c-2cd05e0325e3"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.092176 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1977a705-30e5-456c-8e2c-2cd05e0325e3-pod-info" (OuterVolumeSpecName: "pod-info") pod "1977a705-30e5-456c-8e2c-2cd05e0325e3" (UID: "1977a705-30e5-456c-8e2c-2cd05e0325e3"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.092235 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "1977a705-30e5-456c-8e2c-2cd05e0325e3" (UID: "1977a705-30e5-456c-8e2c-2cd05e0325e3"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.101247 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "1977a705-30e5-456c-8e2c-2cd05e0325e3" (UID: "1977a705-30e5-456c-8e2c-2cd05e0325e3"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.116742 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1977a705-30e5-456c-8e2c-2cd05e0325e3-config-data" (OuterVolumeSpecName: "config-data") pod "1977a705-30e5-456c-8e2c-2cd05e0325e3" (UID: "1977a705-30e5-456c-8e2c-2cd05e0325e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.160076 4847 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.160135 4847 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.160145 4847 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1977a705-30e5-456c-8e2c-2cd05e0325e3-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.160155 4847 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1977a705-30e5-456c-8e2c-2cd05e0325e3-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 
00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.160164 4847 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.160173 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbfpz\" (UniqueName: \"kubernetes.io/projected/1977a705-30e5-456c-8e2c-2cd05e0325e3-kube-api-access-wbfpz\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.160182 4847 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.160195 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1977a705-30e5-456c-8e2c-2cd05e0325e3-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.160203 4847 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1977a705-30e5-456c-8e2c-2cd05e0325e3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.161424 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1977a705-30e5-456c-8e2c-2cd05e0325e3-server-conf" (OuterVolumeSpecName: "server-conf") pod "1977a705-30e5-456c-8e2c-2cd05e0325e3" (UID: "1977a705-30e5-456c-8e2c-2cd05e0325e3"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.174743 4847 scope.go:117] "RemoveContainer" containerID="ad13e112489f27c6e7a5d7aa7d1ca78cb7cf9f788e81a258f04289d28ce72ece" Feb 18 00:52:09 crc kubenswrapper[4847]: E0218 00:52:09.175140 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad13e112489f27c6e7a5d7aa7d1ca78cb7cf9f788e81a258f04289d28ce72ece\": container with ID starting with ad13e112489f27c6e7a5d7aa7d1ca78cb7cf9f788e81a258f04289d28ce72ece not found: ID does not exist" containerID="ad13e112489f27c6e7a5d7aa7d1ca78cb7cf9f788e81a258f04289d28ce72ece" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.175185 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad13e112489f27c6e7a5d7aa7d1ca78cb7cf9f788e81a258f04289d28ce72ece"} err="failed to get container status \"ad13e112489f27c6e7a5d7aa7d1ca78cb7cf9f788e81a258f04289d28ce72ece\": rpc error: code = NotFound desc = could not find container \"ad13e112489f27c6e7a5d7aa7d1ca78cb7cf9f788e81a258f04289d28ce72ece\": container with ID starting with ad13e112489f27c6e7a5d7aa7d1ca78cb7cf9f788e81a258f04289d28ce72ece not found: ID does not exist" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.175211 4847 scope.go:117] "RemoveContainer" containerID="d96bb8e16fe87474f7d51baf5d2ee2d7beb30a197c7d10da3871934e6475e918" Feb 18 00:52:09 crc kubenswrapper[4847]: E0218 00:52:09.175998 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d96bb8e16fe87474f7d51baf5d2ee2d7beb30a197c7d10da3871934e6475e918\": container with ID starting with d96bb8e16fe87474f7d51baf5d2ee2d7beb30a197c7d10da3871934e6475e918 not found: ID does not exist" containerID="d96bb8e16fe87474f7d51baf5d2ee2d7beb30a197c7d10da3871934e6475e918" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.176045 
4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d96bb8e16fe87474f7d51baf5d2ee2d7beb30a197c7d10da3871934e6475e918"} err="failed to get container status \"d96bb8e16fe87474f7d51baf5d2ee2d7beb30a197c7d10da3871934e6475e918\": rpc error: code = NotFound desc = could not find container \"d96bb8e16fe87474f7d51baf5d2ee2d7beb30a197c7d10da3871934e6475e918\": container with ID starting with d96bb8e16fe87474f7d51baf5d2ee2d7beb30a197c7d10da3871934e6475e918 not found: ID does not exist" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.192891 4847 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.220884 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "1977a705-30e5-456c-8e2c-2cd05e0325e3" (UID: "1977a705-30e5-456c-8e2c-2cd05e0325e3"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.265862 4847 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.266144 4847 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1977a705-30e5-456c-8e2c-2cd05e0325e3-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.266216 4847 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1977a705-30e5-456c-8e2c-2cd05e0325e3-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.434416 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.434452 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.470563 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 00:52:09 crc kubenswrapper[4847]: E0218 00:52:09.471469 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1977a705-30e5-456c-8e2c-2cd05e0325e3" containerName="setup-container" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.471495 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="1977a705-30e5-456c-8e2c-2cd05e0325e3" containerName="setup-container" Feb 18 00:52:09 crc kubenswrapper[4847]: E0218 00:52:09.471530 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1977a705-30e5-456c-8e2c-2cd05e0325e3" containerName="rabbitmq" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.471537 4847 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1977a705-30e5-456c-8e2c-2cd05e0325e3" containerName="rabbitmq" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.471811 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="1977a705-30e5-456c-8e2c-2cd05e0325e3" containerName="rabbitmq" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.473038 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.482594 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.482713 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-x9s2h" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.482871 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.482992 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.483093 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.483209 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.483383 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.506227 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 00:52:09 crc kubenswrapper[4847]: E0218 00:52:09.603318 4847 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1977a705_30e5_456c_8e2c_2cd05e0325e3.slice\": RecentStats: unable to find data in memory cache]" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.675643 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b19ac705-a85b-44ee-86c9-c31b23d988c0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.675694 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b19ac705-a85b-44ee-86c9-c31b23d988c0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.675762 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b19ac705-a85b-44ee-86c9-c31b23d988c0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.675782 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b19ac705-a85b-44ee-86c9-c31b23d988c0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.675805 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b19ac705-a85b-44ee-86c9-c31b23d988c0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.675853 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.675890 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b19ac705-a85b-44ee-86c9-c31b23d988c0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.675928 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b19ac705-a85b-44ee-86c9-c31b23d988c0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.675952 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b19ac705-a85b-44ee-86c9-c31b23d988c0-config-data\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.675984 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b19ac705-a85b-44ee-86c9-c31b23d988c0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " 
pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.676013 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjkl6\" (UniqueName: \"kubernetes.io/projected/b19ac705-a85b-44ee-86c9-c31b23d988c0-kube-api-access-wjkl6\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.778097 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b19ac705-a85b-44ee-86c9-c31b23d988c0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.778174 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b19ac705-a85b-44ee-86c9-c31b23d988c0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.778256 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.778322 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b19ac705-a85b-44ee-86c9-c31b23d988c0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.778379 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b19ac705-a85b-44ee-86c9-c31b23d988c0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.778415 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b19ac705-a85b-44ee-86c9-c31b23d988c0-config-data\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.778457 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b19ac705-a85b-44ee-86c9-c31b23d988c0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.778500 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjkl6\" (UniqueName: \"kubernetes.io/projected/b19ac705-a85b-44ee-86c9-c31b23d988c0-kube-api-access-wjkl6\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.778535 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b19ac705-a85b-44ee-86c9-c31b23d988c0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.778558 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/b19ac705-a85b-44ee-86c9-c31b23d988c0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.778637 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b19ac705-a85b-44ee-86c9-c31b23d988c0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.779672 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b19ac705-a85b-44ee-86c9-c31b23d988c0-config-data\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.779968 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b19ac705-a85b-44ee-86c9-c31b23d988c0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.780228 4847 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.781573 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b19ac705-a85b-44ee-86c9-c31b23d988c0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" 
Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.781651 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b19ac705-a85b-44ee-86c9-c31b23d988c0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.782000 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b19ac705-a85b-44ee-86c9-c31b23d988c0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.785001 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b19ac705-a85b-44ee-86c9-c31b23d988c0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.785061 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b19ac705-a85b-44ee-86c9-c31b23d988c0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.785650 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b19ac705-a85b-44ee-86c9-c31b23d988c0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.789129 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/b19ac705-a85b-44ee-86c9-c31b23d988c0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.796203 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjkl6\" (UniqueName: \"kubernetes.io/projected/b19ac705-a85b-44ee-86c9-c31b23d988c0-kube-api-access-wjkl6\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.833292 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"b19ac705-a85b-44ee-86c9-c31b23d988c0\") " pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.910630 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 18 00:52:09 crc kubenswrapper[4847]: I0218 00:52:09.930168 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.054439 4847 generic.go:334] "Generic (PLEG): container finished" podID="d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d" containerID="fda6b12f005508eb0112f2dacc57ae455df31692e964784c806829cf8f822ff5" exitCode=0 Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.054503 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d","Type":"ContainerDied","Data":"fda6b12f005508eb0112f2dacc57ae455df31692e964784c806829cf8f822ff5"} Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.054593 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d","Type":"ContainerDied","Data":"9b35fc296847a7d807fb2aac46813feba648402ca15b83acc1edd79eaab3903a"} Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.054536 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.054630 4847 scope.go:117] "RemoveContainer" containerID="fda6b12f005508eb0112f2dacc57ae455df31692e964784c806829cf8f822ff5" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.089260 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnnf2\" (UniqueName: \"kubernetes.io/projected/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-kube-api-access-fnnf2\") pod \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.089329 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-config-data\") pod \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.089349 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-confd\") pod \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.089428 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-tls\") pod \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.089465 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-plugins\") pod \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\" (UID: 
\"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.089483 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-erlang-cookie-secret\") pod \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.089517 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.089592 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-server-conf\") pod \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.089623 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-erlang-cookie\") pod \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.089644 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-plugins-conf\") pod \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.089710 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-pod-info\") pod \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\" (UID: \"d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d\") " Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.090436 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d" (UID: "d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.091184 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d" (UID: "d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.091589 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d" (UID: "d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.092375 4847 scope.go:117] "RemoveContainer" containerID="fd125797db78eb9c1069ec9e94328c327c2fce1794180d3c76711691cd2e7ec9" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.095791 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d" (UID: "d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.096179 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d" (UID: "d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.101321 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-pod-info" (OuterVolumeSpecName: "pod-info") pod "d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d" (UID: "d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.104065 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-kube-api-access-fnnf2" (OuterVolumeSpecName: "kube-api-access-fnnf2") pod "d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d" (UID: "d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d"). InnerVolumeSpecName "kube-api-access-fnnf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.117789 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d" (UID: "d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.139808 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-config-data" (OuterVolumeSpecName: "config-data") pod "d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d" (UID: "d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.158875 4847 scope.go:117] "RemoveContainer" containerID="fda6b12f005508eb0112f2dacc57ae455df31692e964784c806829cf8f822ff5" Feb 18 00:52:10 crc kubenswrapper[4847]: E0218 00:52:10.159368 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fda6b12f005508eb0112f2dacc57ae455df31692e964784c806829cf8f822ff5\": container with ID starting with fda6b12f005508eb0112f2dacc57ae455df31692e964784c806829cf8f822ff5 not found: ID does not exist" containerID="fda6b12f005508eb0112f2dacc57ae455df31692e964784c806829cf8f822ff5" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.159399 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fda6b12f005508eb0112f2dacc57ae455df31692e964784c806829cf8f822ff5"} err="failed to get container status \"fda6b12f005508eb0112f2dacc57ae455df31692e964784c806829cf8f822ff5\": rpc error: code = NotFound desc = could not find container 
\"fda6b12f005508eb0112f2dacc57ae455df31692e964784c806829cf8f822ff5\": container with ID starting with fda6b12f005508eb0112f2dacc57ae455df31692e964784c806829cf8f822ff5 not found: ID does not exist" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.159421 4847 scope.go:117] "RemoveContainer" containerID="fd125797db78eb9c1069ec9e94328c327c2fce1794180d3c76711691cd2e7ec9" Feb 18 00:52:10 crc kubenswrapper[4847]: E0218 00:52:10.159636 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd125797db78eb9c1069ec9e94328c327c2fce1794180d3c76711691cd2e7ec9\": container with ID starting with fd125797db78eb9c1069ec9e94328c327c2fce1794180d3c76711691cd2e7ec9 not found: ID does not exist" containerID="fd125797db78eb9c1069ec9e94328c327c2fce1794180d3c76711691cd2e7ec9" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.159657 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd125797db78eb9c1069ec9e94328c327c2fce1794180d3c76711691cd2e7ec9"} err="failed to get container status \"fd125797db78eb9c1069ec9e94328c327c2fce1794180d3c76711691cd2e7ec9\": rpc error: code = NotFound desc = could not find container \"fd125797db78eb9c1069ec9e94328c327c2fce1794180d3c76711691cd2e7ec9\": container with ID starting with fd125797db78eb9c1069ec9e94328c327c2fce1794180d3c76711691cd2e7ec9 not found: ID does not exist" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.175141 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-server-conf" (OuterVolumeSpecName: "server-conf") pod "d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d" (UID: "d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.192166 4847 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.192193 4847 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.192220 4847 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.192230 4847 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-server-conf\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.192239 4847 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.192248 4847 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.192256 4847 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-pod-info\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.192264 4847 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnnf2\" (UniqueName: \"kubernetes.io/projected/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-kube-api-access-fnnf2\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.192271 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.192278 4847 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.222444 4847 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.230088 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d" (UID: "d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.294624 4847 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.294656 4847 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.397350 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.407856 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.425053 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 00:52:10 crc kubenswrapper[4847]: E0218 00:52:10.425546 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d" containerName="setup-container" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.425563 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d" containerName="setup-container" Feb 18 00:52:10 crc kubenswrapper[4847]: E0218 00:52:10.425580 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d" containerName="rabbitmq" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.425587 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d" containerName="rabbitmq" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.425790 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d" 
containerName="rabbitmq" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.426916 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.432202 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.432735 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.432902 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.433040 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qnvvw" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.433173 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.433302 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.433489 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.444803 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 00:52:10 crc kubenswrapper[4847]: W0218 00:52:10.521149 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb19ac705_a85b_44ee_86c9_c31b23d988c0.slice/crio-575e5f54c147ec59587b886bb7cc75f7114ba0a3c8e585c0548ad8e2e4792d8a WatchSource:0}: Error finding container 575e5f54c147ec59587b886bb7cc75f7114ba0a3c8e585c0548ad8e2e4792d8a: 
Status 404 returned error can't find the container with id 575e5f54c147ec59587b886bb7cc75f7114ba0a3c8e585c0548ad8e2e4792d8a Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.526667 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 18 00:52:10 crc kubenswrapper[4847]: E0218 00:52:10.532765 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 00:52:10 crc kubenswrapper[4847]: E0218 00:52:10.532817 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 00:52:10 crc kubenswrapper[4847]: E0218 00:52:10.532920 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjwt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-k4t5r_openstack(452f74c1-fa5f-464b-9943-a4a1c2d5c48a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 00:52:10 crc kubenswrapper[4847]: E0218 00:52:10.534318 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.599670 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fac01d88-c41a-44cd-97e2-34d58a619ba1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.599723 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fac01d88-c41a-44cd-97e2-34d58a619ba1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.599768 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fac01d88-c41a-44cd-97e2-34d58a619ba1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.599798 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fac01d88-c41a-44cd-97e2-34d58a619ba1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.599818 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fac01d88-c41a-44cd-97e2-34d58a619ba1-config-data\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.599852 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5k7k\" (UniqueName: \"kubernetes.io/projected/fac01d88-c41a-44cd-97e2-34d58a619ba1-kube-api-access-l5k7k\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.599872 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.599890 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fac01d88-c41a-44cd-97e2-34d58a619ba1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.599909 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fac01d88-c41a-44cd-97e2-34d58a619ba1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.599936 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fac01d88-c41a-44cd-97e2-34d58a619ba1-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.599996 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fac01d88-c41a-44cd-97e2-34d58a619ba1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.702040 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fac01d88-c41a-44cd-97e2-34d58a619ba1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.702112 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fac01d88-c41a-44cd-97e2-34d58a619ba1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.702134 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fac01d88-c41a-44cd-97e2-34d58a619ba1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.702153 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fac01d88-c41a-44cd-97e2-34d58a619ba1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.702169 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5k7k\" (UniqueName: \"kubernetes.io/projected/fac01d88-c41a-44cd-97e2-34d58a619ba1-kube-api-access-l5k7k\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.702189 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.702211 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fac01d88-c41a-44cd-97e2-34d58a619ba1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.702231 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fac01d88-c41a-44cd-97e2-34d58a619ba1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.702260 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fac01d88-c41a-44cd-97e2-34d58a619ba1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 
00:52:10.702321 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fac01d88-c41a-44cd-97e2-34d58a619ba1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.702391 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fac01d88-c41a-44cd-97e2-34d58a619ba1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.703330 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fac01d88-c41a-44cd-97e2-34d58a619ba1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.703556 4847 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.704081 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fac01d88-c41a-44cd-97e2-34d58a619ba1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.704525 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/fac01d88-c41a-44cd-97e2-34d58a619ba1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.704616 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fac01d88-c41a-44cd-97e2-34d58a619ba1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.705085 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fac01d88-c41a-44cd-97e2-34d58a619ba1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.708028 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fac01d88-c41a-44cd-97e2-34d58a619ba1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.709035 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fac01d88-c41a-44cd-97e2-34d58a619ba1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.711753 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fac01d88-c41a-44cd-97e2-34d58a619ba1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.721765 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fac01d88-c41a-44cd-97e2-34d58a619ba1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.724378 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5k7k\" (UniqueName: \"kubernetes.io/projected/fac01d88-c41a-44cd-97e2-34d58a619ba1-kube-api-access-l5k7k\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.742387 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"fac01d88-c41a-44cd-97e2-34d58a619ba1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:10 crc kubenswrapper[4847]: I0218 00:52:10.799159 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:11 crc kubenswrapper[4847]: I0218 00:52:11.069083 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b19ac705-a85b-44ee-86c9-c31b23d988c0","Type":"ContainerStarted","Data":"575e5f54c147ec59587b886bb7cc75f7114ba0a3c8e585c0548ad8e2e4792d8a"} Feb 18 00:52:11 crc kubenswrapper[4847]: I0218 00:52:11.324216 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 18 00:52:11 crc kubenswrapper[4847]: I0218 00:52:11.420473 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1977a705-30e5-456c-8e2c-2cd05e0325e3" path="/var/lib/kubelet/pods/1977a705-30e5-456c-8e2c-2cd05e0325e3/volumes" Feb 18 00:52:11 crc kubenswrapper[4847]: I0218 00:52:11.422072 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d" path="/var/lib/kubelet/pods/d6bf48e5-a0ac-49f3-a35c-d17a39a35a9d/volumes" Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.083124 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fac01d88-c41a-44cd-97e2-34d58a619ba1","Type":"ContainerStarted","Data":"6371b8466f0bf780d70b10d2d2cab5124f3dde45ba187bb6713e7fed3811de32"} Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.390542 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-25phr"] Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.392320 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.395493 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.412638 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-25phr"] Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.538440 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-25phr\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.538541 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kls46\" (UniqueName: \"kubernetes.io/projected/76e08d7c-5d07-441c-89fc-c36a361b3086-kube-api-access-kls46\") pod \"dnsmasq-dns-5b75489c6f-25phr\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.538580 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-25phr\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.538633 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-25phr\" (UID: 
\"76e08d7c-5d07-441c-89fc-c36a361b3086\") " pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.538666 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-config\") pod \"dnsmasq-dns-5b75489c6f-25phr\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.538773 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-25phr\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.538858 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-25phr\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.640373 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-25phr\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.640431 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kls46\" (UniqueName: \"kubernetes.io/projected/76e08d7c-5d07-441c-89fc-c36a361b3086-kube-api-access-kls46\") pod 
\"dnsmasq-dns-5b75489c6f-25phr\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.640457 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-25phr\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.640482 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-25phr\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.640507 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-config\") pod \"dnsmasq-dns-5b75489c6f-25phr\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.640555 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-25phr\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.640629 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-25phr\" (UID: 
\"76e08d7c-5d07-441c-89fc-c36a361b3086\") " pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.641395 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-25phr\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.641541 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-25phr\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.641863 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-25phr\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.642055 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-config\") pod \"dnsmasq-dns-5b75489c6f-25phr\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.642444 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-25phr\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:12 
crc kubenswrapper[4847]: I0218 00:52:12.642444 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-25phr\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.670754 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kls46\" (UniqueName: \"kubernetes.io/projected/76e08d7c-5d07-441c-89fc-c36a361b3086-kube-api-access-kls46\") pod \"dnsmasq-dns-5b75489c6f-25phr\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:12 crc kubenswrapper[4847]: I0218 00:52:12.709240 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:13 crc kubenswrapper[4847]: I0218 00:52:13.095551 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b19ac705-a85b-44ee-86c9-c31b23d988c0","Type":"ContainerStarted","Data":"e159c591088b8e5de443a7fa65ed305bd243ebd544a43974358150457777be87"} Feb 18 00:52:13 crc kubenswrapper[4847]: I0218 00:52:13.290451 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-25phr"] Feb 18 00:52:13 crc kubenswrapper[4847]: W0218 00:52:13.303739 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76e08d7c_5d07_441c_89fc_c36a361b3086.slice/crio-8e185870fc5b2d6a5a3dd6d194bd44ef7b824d4c3ece85747a71ecbd8263695e WatchSource:0}: Error finding container 8e185870fc5b2d6a5a3dd6d194bd44ef7b824d4c3ece85747a71ecbd8263695e: Status 404 returned error can't find the container with id 8e185870fc5b2d6a5a3dd6d194bd44ef7b824d4c3ece85747a71ecbd8263695e Feb 18 00:52:14 crc kubenswrapper[4847]: I0218 
00:52:14.109064 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fac01d88-c41a-44cd-97e2-34d58a619ba1","Type":"ContainerStarted","Data":"d4412d00d5764a1db6af3f5a08f087732e25a34d5c6dacc2be2467d13a403dbc"} Feb 18 00:52:14 crc kubenswrapper[4847]: I0218 00:52:14.115001 4847 generic.go:334] "Generic (PLEG): container finished" podID="76e08d7c-5d07-441c-89fc-c36a361b3086" containerID="d6d10ea84e2fb17b0fdd05eadfe0a1bf2c5e17a34d4e48890fe3a41a48686d44" exitCode=0 Feb 18 00:52:14 crc kubenswrapper[4847]: I0218 00:52:14.115107 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-25phr" event={"ID":"76e08d7c-5d07-441c-89fc-c36a361b3086","Type":"ContainerDied","Data":"d6d10ea84e2fb17b0fdd05eadfe0a1bf2c5e17a34d4e48890fe3a41a48686d44"} Feb 18 00:52:14 crc kubenswrapper[4847]: I0218 00:52:14.115154 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-25phr" event={"ID":"76e08d7c-5d07-441c-89fc-c36a361b3086","Type":"ContainerStarted","Data":"8e185870fc5b2d6a5a3dd6d194bd44ef7b824d4c3ece85747a71ecbd8263695e"} Feb 18 00:52:14 crc kubenswrapper[4847]: I0218 00:52:14.405431 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:52:14 crc kubenswrapper[4847]: E0218 00:52:14.405717 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:52:15 crc kubenswrapper[4847]: I0218 00:52:15.128671 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-25phr" 
event={"ID":"76e08d7c-5d07-441c-89fc-c36a361b3086","Type":"ContainerStarted","Data":"088e9b3d112574aaabb3afcead4ca6d4b397cbf5074ea9ceb2af51cc61659844"} Feb 18 00:52:15 crc kubenswrapper[4847]: I0218 00:52:15.129057 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:15 crc kubenswrapper[4847]: I0218 00:52:15.153324 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b75489c6f-25phr" podStartSLOduration=3.153299365 podStartE2EDuration="3.153299365s" podCreationTimestamp="2026-02-18 00:52:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:52:15.147650038 +0000 UTC m=+1608.525000970" watchObservedRunningTime="2026-02-18 00:52:15.153299365 +0000 UTC m=+1608.530650307" Feb 18 00:52:19 crc kubenswrapper[4847]: I0218 00:52:19.410783 4847 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 00:52:19 crc kubenswrapper[4847]: I0218 00:52:19.424698 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 18 00:52:19 crc kubenswrapper[4847]: E0218 00:52:19.567041 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 00:52:19 crc kubenswrapper[4847]: E0218 00:52:19.567105 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 00:52:19 crc kubenswrapper[4847]: E0218 00:52:19.567242 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9fhbdh67dh64h568hch65dhb8h67chf7h66dhfh585h565h85h554h68ch8bh56ch677h57bhb5h544h575hd7h5f8hc7h66dh669h57bh589h56cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-
ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 18 00:52:19 crc kubenswrapper[4847]: E0218 00:52:19.568478 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:52:20 crc kubenswrapper[4847]: E0218 00:52:20.197204 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:52:22 crc kubenswrapper[4847]: I0218 00:52:22.710964 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:22 crc kubenswrapper[4847]: I0218 00:52:22.817405 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-5tfxd"] Feb 18 00:52:22 crc kubenswrapper[4847]: I0218 00:52:22.817960 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" podUID="9c8be14a-5fbf-40ea-aa45-ea6b6474f281" containerName="dnsmasq-dns" containerID="cri-o://8687f26c929ad42a1aeb726b2eb5122494297f92187014c7c153522bb3feaeef" gracePeriod=10 Feb 18 00:52:22 crc kubenswrapper[4847]: I0218 00:52:22.964643 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cf7b6cbf7-zktfb"] Feb 18 00:52:22 crc kubenswrapper[4847]: I0218 00:52:22.967456 4847 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.004936 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cf7b6cbf7-zktfb"] Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.124745 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/644fa6a1-3d08-4fad-a252-7f1364d0b56e-ovsdbserver-nb\") pod \"dnsmasq-dns-5cf7b6cbf7-zktfb\" (UID: \"644fa6a1-3d08-4fad-a252-7f1364d0b56e\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.124886 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/644fa6a1-3d08-4fad-a252-7f1364d0b56e-config\") pod \"dnsmasq-dns-5cf7b6cbf7-zktfb\" (UID: \"644fa6a1-3d08-4fad-a252-7f1364d0b56e\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.124917 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/644fa6a1-3d08-4fad-a252-7f1364d0b56e-dns-swift-storage-0\") pod \"dnsmasq-dns-5cf7b6cbf7-zktfb\" (UID: \"644fa6a1-3d08-4fad-a252-7f1364d0b56e\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.124948 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/644fa6a1-3d08-4fad-a252-7f1364d0b56e-openstack-edpm-ipam\") pod \"dnsmasq-dns-5cf7b6cbf7-zktfb\" (UID: \"644fa6a1-3d08-4fad-a252-7f1364d0b56e\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.125004 4847 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/644fa6a1-3d08-4fad-a252-7f1364d0b56e-ovsdbserver-sb\") pod \"dnsmasq-dns-5cf7b6cbf7-zktfb\" (UID: \"644fa6a1-3d08-4fad-a252-7f1364d0b56e\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.125021 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwsrp\" (UniqueName: \"kubernetes.io/projected/644fa6a1-3d08-4fad-a252-7f1364d0b56e-kube-api-access-dwsrp\") pod \"dnsmasq-dns-5cf7b6cbf7-zktfb\" (UID: \"644fa6a1-3d08-4fad-a252-7f1364d0b56e\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.125179 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/644fa6a1-3d08-4fad-a252-7f1364d0b56e-dns-svc\") pod \"dnsmasq-dns-5cf7b6cbf7-zktfb\" (UID: \"644fa6a1-3d08-4fad-a252-7f1364d0b56e\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.227751 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/644fa6a1-3d08-4fad-a252-7f1364d0b56e-dns-svc\") pod \"dnsmasq-dns-5cf7b6cbf7-zktfb\" (UID: \"644fa6a1-3d08-4fad-a252-7f1364d0b56e\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.227930 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/644fa6a1-3d08-4fad-a252-7f1364d0b56e-ovsdbserver-nb\") pod \"dnsmasq-dns-5cf7b6cbf7-zktfb\" (UID: \"644fa6a1-3d08-4fad-a252-7f1364d0b56e\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.228007 4847 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/644fa6a1-3d08-4fad-a252-7f1364d0b56e-config\") pod \"dnsmasq-dns-5cf7b6cbf7-zktfb\" (UID: \"644fa6a1-3d08-4fad-a252-7f1364d0b56e\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.228038 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/644fa6a1-3d08-4fad-a252-7f1364d0b56e-dns-swift-storage-0\") pod \"dnsmasq-dns-5cf7b6cbf7-zktfb\" (UID: \"644fa6a1-3d08-4fad-a252-7f1364d0b56e\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.228082 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/644fa6a1-3d08-4fad-a252-7f1364d0b56e-openstack-edpm-ipam\") pod \"dnsmasq-dns-5cf7b6cbf7-zktfb\" (UID: \"644fa6a1-3d08-4fad-a252-7f1364d0b56e\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.228152 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/644fa6a1-3d08-4fad-a252-7f1364d0b56e-ovsdbserver-sb\") pod \"dnsmasq-dns-5cf7b6cbf7-zktfb\" (UID: \"644fa6a1-3d08-4fad-a252-7f1364d0b56e\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.228180 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwsrp\" (UniqueName: \"kubernetes.io/projected/644fa6a1-3d08-4fad-a252-7f1364d0b56e-kube-api-access-dwsrp\") pod \"dnsmasq-dns-5cf7b6cbf7-zktfb\" (UID: \"644fa6a1-3d08-4fad-a252-7f1364d0b56e\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.230121 4847 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/644fa6a1-3d08-4fad-a252-7f1364d0b56e-dns-svc\") pod \"dnsmasq-dns-5cf7b6cbf7-zktfb\" (UID: \"644fa6a1-3d08-4fad-a252-7f1364d0b56e\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.230151 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/644fa6a1-3d08-4fad-a252-7f1364d0b56e-config\") pod \"dnsmasq-dns-5cf7b6cbf7-zktfb\" (UID: \"644fa6a1-3d08-4fad-a252-7f1364d0b56e\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.230237 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/644fa6a1-3d08-4fad-a252-7f1364d0b56e-ovsdbserver-nb\") pod \"dnsmasq-dns-5cf7b6cbf7-zktfb\" (UID: \"644fa6a1-3d08-4fad-a252-7f1364d0b56e\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.230726 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/644fa6a1-3d08-4fad-a252-7f1364d0b56e-dns-swift-storage-0\") pod \"dnsmasq-dns-5cf7b6cbf7-zktfb\" (UID: \"644fa6a1-3d08-4fad-a252-7f1364d0b56e\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.231199 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/644fa6a1-3d08-4fad-a252-7f1364d0b56e-ovsdbserver-sb\") pod \"dnsmasq-dns-5cf7b6cbf7-zktfb\" (UID: \"644fa6a1-3d08-4fad-a252-7f1364d0b56e\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.232440 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/644fa6a1-3d08-4fad-a252-7f1364d0b56e-openstack-edpm-ipam\") pod \"dnsmasq-dns-5cf7b6cbf7-zktfb\" (UID: \"644fa6a1-3d08-4fad-a252-7f1364d0b56e\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.250100 4847 generic.go:334] "Generic (PLEG): container finished" podID="9c8be14a-5fbf-40ea-aa45-ea6b6474f281" containerID="8687f26c929ad42a1aeb726b2eb5122494297f92187014c7c153522bb3feaeef" exitCode=0 Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.250155 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" event={"ID":"9c8be14a-5fbf-40ea-aa45-ea6b6474f281","Type":"ContainerDied","Data":"8687f26c929ad42a1aeb726b2eb5122494297f92187014c7c153522bb3feaeef"} Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.255200 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwsrp\" (UniqueName: \"kubernetes.io/projected/644fa6a1-3d08-4fad-a252-7f1364d0b56e-kube-api-access-dwsrp\") pod \"dnsmasq-dns-5cf7b6cbf7-zktfb\" (UID: \"644fa6a1-3d08-4fad-a252-7f1364d0b56e\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.318056 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:23 crc kubenswrapper[4847]: E0218 00:52:23.409988 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.456277 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.534330 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-config\") pod \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\" (UID: \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.534437 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-ovsdbserver-nb\") pod \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\" (UID: \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.534477 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-dns-swift-storage-0\") pod \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\" (UID: \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.534531 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-dns-svc\") pod \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\" (UID: \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.534574 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-ovsdbserver-sb\") pod \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\" (UID: \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.534685 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk9nz\" 
(UniqueName: \"kubernetes.io/projected/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-kube-api-access-mk9nz\") pod \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\" (UID: \"9c8be14a-5fbf-40ea-aa45-ea6b6474f281\") " Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.558992 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-kube-api-access-mk9nz" (OuterVolumeSpecName: "kube-api-access-mk9nz") pod "9c8be14a-5fbf-40ea-aa45-ea6b6474f281" (UID: "9c8be14a-5fbf-40ea-aa45-ea6b6474f281"). InnerVolumeSpecName "kube-api-access-mk9nz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.638012 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk9nz\" (UniqueName: \"kubernetes.io/projected/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-kube-api-access-mk9nz\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.746267 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9c8be14a-5fbf-40ea-aa45-ea6b6474f281" (UID: "9c8be14a-5fbf-40ea-aa45-ea6b6474f281"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.775154 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9c8be14a-5fbf-40ea-aa45-ea6b6474f281" (UID: "9c8be14a-5fbf-40ea-aa45-ea6b6474f281"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.797562 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9c8be14a-5fbf-40ea-aa45-ea6b6474f281" (UID: "9c8be14a-5fbf-40ea-aa45-ea6b6474f281"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.818658 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-config" (OuterVolumeSpecName: "config") pod "9c8be14a-5fbf-40ea-aa45-ea6b6474f281" (UID: "9c8be14a-5fbf-40ea-aa45-ea6b6474f281"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.837527 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9c8be14a-5fbf-40ea-aa45-ea6b6474f281" (UID: "9c8be14a-5fbf-40ea-aa45-ea6b6474f281"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.847442 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.847485 4847 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.847499 4847 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.847513 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.847528 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c8be14a-5fbf-40ea-aa45-ea6b6474f281-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:23 crc kubenswrapper[4847]: I0218 00:52:23.896364 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cf7b6cbf7-zktfb"] Feb 18 00:52:24 crc kubenswrapper[4847]: I0218 00:52:24.275205 4847 generic.go:334] "Generic (PLEG): container finished" podID="644fa6a1-3d08-4fad-a252-7f1364d0b56e" containerID="1d5211bd2922f3a95e0a697ccf3caf43fc45ef30ee8be7dffff28db26677bcd4" exitCode=0 Feb 18 00:52:24 crc kubenswrapper[4847]: I0218 00:52:24.275547 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" 
event={"ID":"644fa6a1-3d08-4fad-a252-7f1364d0b56e","Type":"ContainerDied","Data":"1d5211bd2922f3a95e0a697ccf3caf43fc45ef30ee8be7dffff28db26677bcd4"} Feb 18 00:52:24 crc kubenswrapper[4847]: I0218 00:52:24.275579 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" event={"ID":"644fa6a1-3d08-4fad-a252-7f1364d0b56e","Type":"ContainerStarted","Data":"a240c040dbd0f9c831746fad78f99602f3740061ba7e665aa522bbb30eda21fe"} Feb 18 00:52:24 crc kubenswrapper[4847]: I0218 00:52:24.284847 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" event={"ID":"9c8be14a-5fbf-40ea-aa45-ea6b6474f281","Type":"ContainerDied","Data":"3739451864dc258f92db7d4729cc68b6c96f0842770265f0da8ac7a030c30311"} Feb 18 00:52:24 crc kubenswrapper[4847]: I0218 00:52:24.284908 4847 scope.go:117] "RemoveContainer" containerID="8687f26c929ad42a1aeb726b2eb5122494297f92187014c7c153522bb3feaeef" Feb 18 00:52:24 crc kubenswrapper[4847]: I0218 00:52:24.285013 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-5tfxd" Feb 18 00:52:24 crc kubenswrapper[4847]: I0218 00:52:24.451018 4847 scope.go:117] "RemoveContainer" containerID="10e3f7e4198522ee34cd7815d728d63ff8bc5a2c434c6680e89639e6b181c343" Feb 18 00:52:24 crc kubenswrapper[4847]: I0218 00:52:24.481745 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-5tfxd"] Feb 18 00:52:24 crc kubenswrapper[4847]: I0218 00:52:24.489525 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-5tfxd"] Feb 18 00:52:25 crc kubenswrapper[4847]: I0218 00:52:25.300471 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" event={"ID":"644fa6a1-3d08-4fad-a252-7f1364d0b56e","Type":"ContainerStarted","Data":"32385ed1ae0f80659d42bdc98cc514959a999033658df6941dd188a5e2e9b30c"} Feb 18 00:52:25 crc kubenswrapper[4847]: I0218 00:52:25.300972 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:25 crc kubenswrapper[4847]: I0218 00:52:25.325758 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" podStartSLOduration=3.325737621 podStartE2EDuration="3.325737621s" podCreationTimestamp="2026-02-18 00:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:52:25.323304852 +0000 UTC m=+1618.700655794" watchObservedRunningTime="2026-02-18 00:52:25.325737621 +0000 UTC m=+1618.703088563" Feb 18 00:52:25 crc kubenswrapper[4847]: I0218 00:52:25.405020 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:52:25 crc kubenswrapper[4847]: E0218 00:52:25.405397 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:52:25 crc kubenswrapper[4847]: I0218 00:52:25.420519 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c8be14a-5fbf-40ea-aa45-ea6b6474f281" path="/var/lib/kubelet/pods/9c8be14a-5fbf-40ea-aa45-ea6b6474f281/volumes" Feb 18 00:52:33 crc kubenswrapper[4847]: I0218 00:52:33.319826 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5cf7b6cbf7-zktfb" Feb 18 00:52:33 crc kubenswrapper[4847]: E0218 00:52:33.407905 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:52:33 crc kubenswrapper[4847]: I0218 00:52:33.439791 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-25phr"] Feb 18 00:52:33 crc kubenswrapper[4847]: I0218 00:52:33.440414 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b75489c6f-25phr" podUID="76e08d7c-5d07-441c-89fc-c36a361b3086" containerName="dnsmasq-dns" containerID="cri-o://088e9b3d112574aaabb3afcead4ca6d4b397cbf5074ea9ceb2af51cc61659844" gracePeriod=10 Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.060699 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.116635 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-config\") pod \"76e08d7c-5d07-441c-89fc-c36a361b3086\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.116713 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kls46\" (UniqueName: \"kubernetes.io/projected/76e08d7c-5d07-441c-89fc-c36a361b3086-kube-api-access-kls46\") pod \"76e08d7c-5d07-441c-89fc-c36a361b3086\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.116809 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-dns-swift-storage-0\") pod \"76e08d7c-5d07-441c-89fc-c36a361b3086\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.116981 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-ovsdbserver-nb\") pod \"76e08d7c-5d07-441c-89fc-c36a361b3086\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.117020 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-dns-svc\") pod \"76e08d7c-5d07-441c-89fc-c36a361b3086\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.117044 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-openstack-edpm-ipam\") pod \"76e08d7c-5d07-441c-89fc-c36a361b3086\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.117089 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-ovsdbserver-sb\") pod \"76e08d7c-5d07-441c-89fc-c36a361b3086\" (UID: \"76e08d7c-5d07-441c-89fc-c36a361b3086\") " Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.146761 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e08d7c-5d07-441c-89fc-c36a361b3086-kube-api-access-kls46" (OuterVolumeSpecName: "kube-api-access-kls46") pod "76e08d7c-5d07-441c-89fc-c36a361b3086" (UID: "76e08d7c-5d07-441c-89fc-c36a361b3086"). InnerVolumeSpecName "kube-api-access-kls46". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.219097 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kls46\" (UniqueName: \"kubernetes.io/projected/76e08d7c-5d07-441c-89fc-c36a361b3086-kube-api-access-kls46\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.297765 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-config" (OuterVolumeSpecName: "config") pod "76e08d7c-5d07-441c-89fc-c36a361b3086" (UID: "76e08d7c-5d07-441c-89fc-c36a361b3086"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.301398 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "76e08d7c-5d07-441c-89fc-c36a361b3086" (UID: "76e08d7c-5d07-441c-89fc-c36a361b3086"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.313678 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "76e08d7c-5d07-441c-89fc-c36a361b3086" (UID: "76e08d7c-5d07-441c-89fc-c36a361b3086"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.318369 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "76e08d7c-5d07-441c-89fc-c36a361b3086" (UID: "76e08d7c-5d07-441c-89fc-c36a361b3086"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.320300 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "76e08d7c-5d07-441c-89fc-c36a361b3086" (UID: "76e08d7c-5d07-441c-89fc-c36a361b3086"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.321540 4847 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.321562 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.321572 4847 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.321583 4847 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.321593 4847 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.324004 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "76e08d7c-5d07-441c-89fc-c36a361b3086" (UID: "76e08d7c-5d07-441c-89fc-c36a361b3086"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.421735 4847 generic.go:334] "Generic (PLEG): container finished" podID="76e08d7c-5d07-441c-89fc-c36a361b3086" containerID="088e9b3d112574aaabb3afcead4ca6d4b397cbf5074ea9ceb2af51cc61659844" exitCode=0 Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.421783 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-25phr" event={"ID":"76e08d7c-5d07-441c-89fc-c36a361b3086","Type":"ContainerDied","Data":"088e9b3d112574aaabb3afcead4ca6d4b397cbf5074ea9ceb2af51cc61659844"} Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.421896 4847 scope.go:117] "RemoveContainer" containerID="088e9b3d112574aaabb3afcead4ca6d4b397cbf5074ea9ceb2af51cc61659844" Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.421812 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-25phr" event={"ID":"76e08d7c-5d07-441c-89fc-c36a361b3086","Type":"ContainerDied","Data":"8e185870fc5b2d6a5a3dd6d194bd44ef7b824d4c3ece85747a71ecbd8263695e"} Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.422344 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-25phr" Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.423369 4847 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76e08d7c-5d07-441c-89fc-c36a361b3086-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.448807 4847 scope.go:117] "RemoveContainer" containerID="d6d10ea84e2fb17b0fdd05eadfe0a1bf2c5e17a34d4e48890fe3a41a48686d44" Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.458150 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-25phr"] Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.468799 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-25phr"] Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.479581 4847 scope.go:117] "RemoveContainer" containerID="088e9b3d112574aaabb3afcead4ca6d4b397cbf5074ea9ceb2af51cc61659844" Feb 18 00:52:34 crc kubenswrapper[4847]: E0218 00:52:34.480005 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"088e9b3d112574aaabb3afcead4ca6d4b397cbf5074ea9ceb2af51cc61659844\": container with ID starting with 088e9b3d112574aaabb3afcead4ca6d4b397cbf5074ea9ceb2af51cc61659844 not found: ID does not exist" containerID="088e9b3d112574aaabb3afcead4ca6d4b397cbf5074ea9ceb2af51cc61659844" Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.480058 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"088e9b3d112574aaabb3afcead4ca6d4b397cbf5074ea9ceb2af51cc61659844"} err="failed to get container status \"088e9b3d112574aaabb3afcead4ca6d4b397cbf5074ea9ceb2af51cc61659844\": rpc error: code = NotFound desc = could not find container \"088e9b3d112574aaabb3afcead4ca6d4b397cbf5074ea9ceb2af51cc61659844\": container with ID starting with 
088e9b3d112574aaabb3afcead4ca6d4b397cbf5074ea9ceb2af51cc61659844 not found: ID does not exist" Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.480090 4847 scope.go:117] "RemoveContainer" containerID="d6d10ea84e2fb17b0fdd05eadfe0a1bf2c5e17a34d4e48890fe3a41a48686d44" Feb 18 00:52:34 crc kubenswrapper[4847]: E0218 00:52:34.480542 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6d10ea84e2fb17b0fdd05eadfe0a1bf2c5e17a34d4e48890fe3a41a48686d44\": container with ID starting with d6d10ea84e2fb17b0fdd05eadfe0a1bf2c5e17a34d4e48890fe3a41a48686d44 not found: ID does not exist" containerID="d6d10ea84e2fb17b0fdd05eadfe0a1bf2c5e17a34d4e48890fe3a41a48686d44" Feb 18 00:52:34 crc kubenswrapper[4847]: I0218 00:52:34.480588 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6d10ea84e2fb17b0fdd05eadfe0a1bf2c5e17a34d4e48890fe3a41a48686d44"} err="failed to get container status \"d6d10ea84e2fb17b0fdd05eadfe0a1bf2c5e17a34d4e48890fe3a41a48686d44\": rpc error: code = NotFound desc = could not find container \"d6d10ea84e2fb17b0fdd05eadfe0a1bf2c5e17a34d4e48890fe3a41a48686d44\": container with ID starting with d6d10ea84e2fb17b0fdd05eadfe0a1bf2c5e17a34d4e48890fe3a41a48686d44 not found: ID does not exist" Feb 18 00:52:34 crc kubenswrapper[4847]: E0218 00:52:34.504078 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 00:52:34 crc kubenswrapper[4847]: E0218 00:52:34.504135 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 00:52:34 crc kubenswrapper[4847]: E0218 00:52:34.504282 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjwt
2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-k4t5r_openstack(452f74c1-fa5f-464b-9943-a4a1c2d5c48a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 00:52:34 crc kubenswrapper[4847]: E0218 00:52:34.505533 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:52:35 crc kubenswrapper[4847]: I0218 00:52:35.427327 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76e08d7c-5d07-441c-89fc-c36a361b3086" path="/var/lib/kubelet/pods/76e08d7c-5d07-441c-89fc-c36a361b3086/volumes" Feb 18 00:52:37 crc kubenswrapper[4847]: I0218 00:52:37.413902 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:52:37 crc kubenswrapper[4847]: E0218 00:52:37.414832 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:52:44 crc kubenswrapper[4847]: I0218 00:52:44.565806 4847 generic.go:334] "Generic (PLEG): container finished" podID="b19ac705-a85b-44ee-86c9-c31b23d988c0" containerID="e159c591088b8e5de443a7fa65ed305bd243ebd544a43974358150457777be87" exitCode=0 Feb 18 00:52:44 crc kubenswrapper[4847]: I0218 00:52:44.565914 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b19ac705-a85b-44ee-86c9-c31b23d988c0","Type":"ContainerDied","Data":"e159c591088b8e5de443a7fa65ed305bd243ebd544a43974358150457777be87"} Feb 18 00:52:45 crc kubenswrapper[4847]: E0218 00:52:45.521742 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag 
current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 00:52:45 crc kubenswrapper[4847]: E0218 00:52:45.522129 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 00:52:45 crc kubenswrapper[4847]: E0218 00:52:45.522362 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9fhbdh67dh64h568hch65dhb8h67chf7h66dhfh585h565h85h554h68ch8bh56ch677h57bhb5h544h575hd7h5f8hc7h66dh669h57bh589h56cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 00:52:45 crc kubenswrapper[4847]: E0218 00:52:45.523562 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:52:45 crc kubenswrapper[4847]: I0218 00:52:45.578962 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b19ac705-a85b-44ee-86c9-c31b23d988c0","Type":"ContainerStarted","Data":"6752a4454202f36d034315df942fc027a38b9bacdc1aba205c5cf3738dd922ec"} Feb 18 00:52:45 crc kubenswrapper[4847]: I0218 00:52:45.579222 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 18 00:52:45 crc kubenswrapper[4847]: I0218 00:52:45.581276 4847 generic.go:334] "Generic (PLEG): container finished" podID="fac01d88-c41a-44cd-97e2-34d58a619ba1" containerID="d4412d00d5764a1db6af3f5a08f087732e25a34d5c6dacc2be2467d13a403dbc" exitCode=0 Feb 18 00:52:45 crc kubenswrapper[4847]: I0218 00:52:45.581300 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fac01d88-c41a-44cd-97e2-34d58a619ba1","Type":"ContainerDied","Data":"d4412d00d5764a1db6af3f5a08f087732e25a34d5c6dacc2be2467d13a403dbc"} Feb 18 00:52:45 crc kubenswrapper[4847]: I0218 00:52:45.618114 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.618099812 podStartE2EDuration="36.618099812s" podCreationTimestamp="2026-02-18 00:52:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:52:45.61637292 +0000 UTC m=+1638.993723882" watchObservedRunningTime="2026-02-18 00:52:45.618099812 +0000 UTC m=+1638.995450754" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.492354 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt"] Feb 18 00:52:46 crc kubenswrapper[4847]: E0218 00:52:46.493084 4847 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="9c8be14a-5fbf-40ea-aa45-ea6b6474f281" containerName="dnsmasq-dns" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.493101 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c8be14a-5fbf-40ea-aa45-ea6b6474f281" containerName="dnsmasq-dns" Feb 18 00:52:46 crc kubenswrapper[4847]: E0218 00:52:46.493126 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e08d7c-5d07-441c-89fc-c36a361b3086" containerName="dnsmasq-dns" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.493132 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e08d7c-5d07-441c-89fc-c36a361b3086" containerName="dnsmasq-dns" Feb 18 00:52:46 crc kubenswrapper[4847]: E0218 00:52:46.493147 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c8be14a-5fbf-40ea-aa45-ea6b6474f281" containerName="init" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.493153 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c8be14a-5fbf-40ea-aa45-ea6b6474f281" containerName="init" Feb 18 00:52:46 crc kubenswrapper[4847]: E0218 00:52:46.493168 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e08d7c-5d07-441c-89fc-c36a361b3086" containerName="init" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.493174 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e08d7c-5d07-441c-89fc-c36a361b3086" containerName="init" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.493364 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c8be14a-5fbf-40ea-aa45-ea6b6474f281" containerName="dnsmasq-dns" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.493372 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="76e08d7c-5d07-441c-89fc-c36a361b3086" containerName="dnsmasq-dns" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.494105 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.497291 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.497305 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.499015 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.499980 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xzl9l" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.513949 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt"] Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.593017 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fac01d88-c41a-44cd-97e2-34d58a619ba1","Type":"ContainerStarted","Data":"687cbdabd4687ab147583c592d6e4809074947398315eb1d4e10031a470df194"} Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.593439 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.616529 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.616509441 podStartE2EDuration="36.616509441s" podCreationTimestamp="2026-02-18 00:52:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:52:46.610922192 +0000 UTC m=+1639.988273154" 
watchObservedRunningTime="2026-02-18 00:52:46.616509441 +0000 UTC m=+1639.993860383" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.618502 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a653876-94ca-4328-825b-abca7b86ea33-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt\" (UID: \"2a653876-94ca-4328-825b-abca7b86ea33\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.618577 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpgm7\" (UniqueName: \"kubernetes.io/projected/2a653876-94ca-4328-825b-abca7b86ea33-kube-api-access-hpgm7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt\" (UID: \"2a653876-94ca-4328-825b-abca7b86ea33\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.618862 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a653876-94ca-4328-825b-abca7b86ea33-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt\" (UID: \"2a653876-94ca-4328-825b-abca7b86ea33\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.618985 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a653876-94ca-4328-825b-abca7b86ea33-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt\" (UID: \"2a653876-94ca-4328-825b-abca7b86ea33\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt" Feb 18 00:52:46 crc 
kubenswrapper[4847]: I0218 00:52:46.732808 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a653876-94ca-4328-825b-abca7b86ea33-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt\" (UID: \"2a653876-94ca-4328-825b-abca7b86ea33\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.733140 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpgm7\" (UniqueName: \"kubernetes.io/projected/2a653876-94ca-4328-825b-abca7b86ea33-kube-api-access-hpgm7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt\" (UID: \"2a653876-94ca-4328-825b-abca7b86ea33\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.733496 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a653876-94ca-4328-825b-abca7b86ea33-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt\" (UID: \"2a653876-94ca-4328-825b-abca7b86ea33\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.733697 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a653876-94ca-4328-825b-abca7b86ea33-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt\" (UID: \"2a653876-94ca-4328-825b-abca7b86ea33\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.739903 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/2a653876-94ca-4328-825b-abca7b86ea33-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt\" (UID: \"2a653876-94ca-4328-825b-abca7b86ea33\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.746680 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a653876-94ca-4328-825b-abca7b86ea33-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt\" (UID: \"2a653876-94ca-4328-825b-abca7b86ea33\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.767054 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a653876-94ca-4328-825b-abca7b86ea33-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt\" (UID: \"2a653876-94ca-4328-825b-abca7b86ea33\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.772546 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpgm7\" (UniqueName: \"kubernetes.io/projected/2a653876-94ca-4328-825b-abca7b86ea33-kube-api-access-hpgm7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt\" (UID: \"2a653876-94ca-4328-825b-abca7b86ea33\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt" Feb 18 00:52:46 crc kubenswrapper[4847]: I0218 00:52:46.813621 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt" Feb 18 00:52:47 crc kubenswrapper[4847]: E0218 00:52:47.424927 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:52:47 crc kubenswrapper[4847]: I0218 00:52:47.494387 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt"] Feb 18 00:52:47 crc kubenswrapper[4847]: I0218 00:52:47.613701 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt" event={"ID":"2a653876-94ca-4328-825b-abca7b86ea33","Type":"ContainerStarted","Data":"ad22892d66a07bcc7b71685b94c6abc55a25302512a21e8baa17dcebab6c9f82"} Feb 18 00:52:52 crc kubenswrapper[4847]: I0218 00:52:52.404294 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:52:52 crc kubenswrapper[4847]: E0218 00:52:52.405072 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:52:56 crc kubenswrapper[4847]: E0218 00:52:56.408094 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:52:58 crc kubenswrapper[4847]: I0218 00:52:58.768321 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt" event={"ID":"2a653876-94ca-4328-825b-abca7b86ea33","Type":"ContainerStarted","Data":"1f253c514f2a7061ba95391052760c59d6df07d684b4bdc06c7c08363cc83ed8"} Feb 18 00:52:58 crc kubenswrapper[4847]: I0218 00:52:58.802662 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt" podStartSLOduration=1.851448845 podStartE2EDuration="12.802626619s" podCreationTimestamp="2026-02-18 00:52:46 +0000 UTC" firstStartedPulling="2026-02-18 00:52:47.473877647 +0000 UTC m=+1640.851228609" lastFinishedPulling="2026-02-18 00:52:58.425055441 +0000 UTC m=+1651.802406383" observedRunningTime="2026-02-18 00:52:58.789870462 +0000 UTC m=+1652.167221414" watchObservedRunningTime="2026-02-18 00:52:58.802626619 +0000 UTC m=+1652.179977591" Feb 18 00:52:59 crc kubenswrapper[4847]: E0218 00:52:59.408621 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:52:59 crc kubenswrapper[4847]: I0218 00:52:59.915870 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 18 00:53:00 crc kubenswrapper[4847]: I0218 00:53:00.801879 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 18 00:53:05 crc kubenswrapper[4847]: I0218 00:53:05.404119 4847 scope.go:117] 
"RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:53:05 crc kubenswrapper[4847]: E0218 00:53:05.405794 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:53:08 crc kubenswrapper[4847]: E0218 00:53:08.406862 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:53:10 crc kubenswrapper[4847]: I0218 00:53:10.945172 4847 generic.go:334] "Generic (PLEG): container finished" podID="2a653876-94ca-4328-825b-abca7b86ea33" containerID="1f253c514f2a7061ba95391052760c59d6df07d684b4bdc06c7c08363cc83ed8" exitCode=0 Feb 18 00:53:10 crc kubenswrapper[4847]: I0218 00:53:10.945286 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt" event={"ID":"2a653876-94ca-4328-825b-abca7b86ea33","Type":"ContainerDied","Data":"1f253c514f2a7061ba95391052760c59d6df07d684b4bdc06c7c08363cc83ed8"} Feb 18 00:53:11 crc kubenswrapper[4847]: E0218 00:53:11.407474 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 
00:53:12 crc kubenswrapper[4847]: I0218 00:53:12.580430 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt" Feb 18 00:53:12 crc kubenswrapper[4847]: I0218 00:53:12.700146 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a653876-94ca-4328-825b-abca7b86ea33-ssh-key-openstack-edpm-ipam\") pod \"2a653876-94ca-4328-825b-abca7b86ea33\" (UID: \"2a653876-94ca-4328-825b-abca7b86ea33\") " Feb 18 00:53:12 crc kubenswrapper[4847]: I0218 00:53:12.700273 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a653876-94ca-4328-825b-abca7b86ea33-inventory\") pod \"2a653876-94ca-4328-825b-abca7b86ea33\" (UID: \"2a653876-94ca-4328-825b-abca7b86ea33\") " Feb 18 00:53:12 crc kubenswrapper[4847]: I0218 00:53:12.700297 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpgm7\" (UniqueName: \"kubernetes.io/projected/2a653876-94ca-4328-825b-abca7b86ea33-kube-api-access-hpgm7\") pod \"2a653876-94ca-4328-825b-abca7b86ea33\" (UID: \"2a653876-94ca-4328-825b-abca7b86ea33\") " Feb 18 00:53:12 crc kubenswrapper[4847]: I0218 00:53:12.701300 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a653876-94ca-4328-825b-abca7b86ea33-repo-setup-combined-ca-bundle\") pod \"2a653876-94ca-4328-825b-abca7b86ea33\" (UID: \"2a653876-94ca-4328-825b-abca7b86ea33\") " Feb 18 00:53:12 crc kubenswrapper[4847]: I0218 00:53:12.707755 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a653876-94ca-4328-825b-abca7b86ea33-kube-api-access-hpgm7" (OuterVolumeSpecName: "kube-api-access-hpgm7") pod "2a653876-94ca-4328-825b-abca7b86ea33" (UID: 
"2a653876-94ca-4328-825b-abca7b86ea33"). InnerVolumeSpecName "kube-api-access-hpgm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:53:12 crc kubenswrapper[4847]: I0218 00:53:12.708030 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a653876-94ca-4328-825b-abca7b86ea33-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2a653876-94ca-4328-825b-abca7b86ea33" (UID: "2a653876-94ca-4328-825b-abca7b86ea33"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:12 crc kubenswrapper[4847]: I0218 00:53:12.732821 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a653876-94ca-4328-825b-abca7b86ea33-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2a653876-94ca-4328-825b-abca7b86ea33" (UID: "2a653876-94ca-4328-825b-abca7b86ea33"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:12 crc kubenswrapper[4847]: I0218 00:53:12.740813 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a653876-94ca-4328-825b-abca7b86ea33-inventory" (OuterVolumeSpecName: "inventory") pod "2a653876-94ca-4328-825b-abca7b86ea33" (UID: "2a653876-94ca-4328-825b-abca7b86ea33"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:53:12 crc kubenswrapper[4847]: I0218 00:53:12.804392 4847 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2a653876-94ca-4328-825b-abca7b86ea33-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:12 crc kubenswrapper[4847]: I0218 00:53:12.804736 4847 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a653876-94ca-4328-825b-abca7b86ea33-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:12 crc kubenswrapper[4847]: I0218 00:53:12.804750 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpgm7\" (UniqueName: \"kubernetes.io/projected/2a653876-94ca-4328-825b-abca7b86ea33-kube-api-access-hpgm7\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:12 crc kubenswrapper[4847]: I0218 00:53:12.804763 4847 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a653876-94ca-4328-825b-abca7b86ea33-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:53:12 crc kubenswrapper[4847]: I0218 00:53:12.993914 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt" event={"ID":"2a653876-94ca-4328-825b-abca7b86ea33","Type":"ContainerDied","Data":"ad22892d66a07bcc7b71685b94c6abc55a25302512a21e8baa17dcebab6c9f82"} Feb 18 00:53:12 crc kubenswrapper[4847]: I0218 00:53:12.993983 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad22892d66a07bcc7b71685b94c6abc55a25302512a21e8baa17dcebab6c9f82" Feb 18 00:53:12 crc kubenswrapper[4847]: I0218 00:53:12.994004 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt" Feb 18 00:53:13 crc kubenswrapper[4847]: I0218 00:53:13.077129 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl"] Feb 18 00:53:13 crc kubenswrapper[4847]: E0218 00:53:13.078160 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a653876-94ca-4328-825b-abca7b86ea33" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 00:53:13 crc kubenswrapper[4847]: I0218 00:53:13.078191 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a653876-94ca-4328-825b-abca7b86ea33" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 00:53:13 crc kubenswrapper[4847]: I0218 00:53:13.078487 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a653876-94ca-4328-825b-abca7b86ea33" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 18 00:53:13 crc kubenswrapper[4847]: I0218 00:53:13.079486 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl" Feb 18 00:53:13 crc kubenswrapper[4847]: I0218 00:53:13.082202 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xzl9l" Feb 18 00:53:13 crc kubenswrapper[4847]: I0218 00:53:13.082358 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 00:53:13 crc kubenswrapper[4847]: I0218 00:53:13.082369 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 00:53:13 crc kubenswrapper[4847]: I0218 00:53:13.082540 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 00:53:13 crc kubenswrapper[4847]: I0218 00:53:13.089062 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl"] Feb 18 00:53:13 crc kubenswrapper[4847]: I0218 00:53:13.220393 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c608e56-c3b4-4a23-ac5e-2994862ffea6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl\" (UID: \"7c608e56-c3b4-4a23-ac5e-2994862ffea6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl" Feb 18 00:53:13 crc kubenswrapper[4847]: I0218 00:53:13.220816 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46q89\" (UniqueName: \"kubernetes.io/projected/7c608e56-c3b4-4a23-ac5e-2994862ffea6-kube-api-access-46q89\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl\" (UID: \"7c608e56-c3b4-4a23-ac5e-2994862ffea6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl" Feb 18 00:53:13 crc kubenswrapper[4847]: I0218 
00:53:13.221012 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c608e56-c3b4-4a23-ac5e-2994862ffea6-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl\" (UID: \"7c608e56-c3b4-4a23-ac5e-2994862ffea6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl" Feb 18 00:53:13 crc kubenswrapper[4847]: I0218 00:53:13.221116 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c608e56-c3b4-4a23-ac5e-2994862ffea6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl\" (UID: \"7c608e56-c3b4-4a23-ac5e-2994862ffea6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl" Feb 18 00:53:13 crc kubenswrapper[4847]: I0218 00:53:13.323697 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46q89\" (UniqueName: \"kubernetes.io/projected/7c608e56-c3b4-4a23-ac5e-2994862ffea6-kube-api-access-46q89\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl\" (UID: \"7c608e56-c3b4-4a23-ac5e-2994862ffea6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl" Feb 18 00:53:13 crc kubenswrapper[4847]: I0218 00:53:13.323753 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c608e56-c3b4-4a23-ac5e-2994862ffea6-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl\" (UID: \"7c608e56-c3b4-4a23-ac5e-2994862ffea6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl" Feb 18 00:53:13 crc kubenswrapper[4847]: I0218 00:53:13.323782 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/7c608e56-c3b4-4a23-ac5e-2994862ffea6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl\" (UID: \"7c608e56-c3b4-4a23-ac5e-2994862ffea6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl" Feb 18 00:53:13 crc kubenswrapper[4847]: I0218 00:53:13.323865 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c608e56-c3b4-4a23-ac5e-2994862ffea6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl\" (UID: \"7c608e56-c3b4-4a23-ac5e-2994862ffea6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl" Feb 18 00:53:13 crc kubenswrapper[4847]: I0218 00:53:13.328733 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c608e56-c3b4-4a23-ac5e-2994862ffea6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl\" (UID: \"7c608e56-c3b4-4a23-ac5e-2994862ffea6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl" Feb 18 00:53:13 crc kubenswrapper[4847]: I0218 00:53:13.345780 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c608e56-c3b4-4a23-ac5e-2994862ffea6-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl\" (UID: \"7c608e56-c3b4-4a23-ac5e-2994862ffea6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl" Feb 18 00:53:13 crc kubenswrapper[4847]: I0218 00:53:13.348359 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46q89\" (UniqueName: \"kubernetes.io/projected/7c608e56-c3b4-4a23-ac5e-2994862ffea6-kube-api-access-46q89\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl\" (UID: \"7c608e56-c3b4-4a23-ac5e-2994862ffea6\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl" Feb 18 00:53:13 crc kubenswrapper[4847]: I0218 00:53:13.349445 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c608e56-c3b4-4a23-ac5e-2994862ffea6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl\" (UID: \"7c608e56-c3b4-4a23-ac5e-2994862ffea6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl" Feb 18 00:53:13 crc kubenswrapper[4847]: I0218 00:53:13.404238 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl" Feb 18 00:53:14 crc kubenswrapper[4847]: I0218 00:53:14.005798 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl"] Feb 18 00:53:15 crc kubenswrapper[4847]: I0218 00:53:15.019693 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl" event={"ID":"7c608e56-c3b4-4a23-ac5e-2994862ffea6","Type":"ContainerStarted","Data":"ac5d8d86dcb1d42039646f04926ffafc182b434ca62e086128c99e7dd7990957"} Feb 18 00:53:15 crc kubenswrapper[4847]: I0218 00:53:15.020348 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl" event={"ID":"7c608e56-c3b4-4a23-ac5e-2994862ffea6","Type":"ContainerStarted","Data":"b64d7a2d5e67422a05de4f6ec9a71f43b19fd0eb205e11a79b248718862dd192"} Feb 18 00:53:15 crc kubenswrapper[4847]: I0218 00:53:15.058318 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl" podStartSLOduration=1.614603556 podStartE2EDuration="2.058293196s" podCreationTimestamp="2026-02-18 00:53:13 +0000 UTC" firstStartedPulling="2026-02-18 00:53:14.002364479 +0000 UTC m=+1667.379715461" 
lastFinishedPulling="2026-02-18 00:53:14.446054119 +0000 UTC m=+1667.823405101" observedRunningTime="2026-02-18 00:53:15.04516509 +0000 UTC m=+1668.422516062" watchObservedRunningTime="2026-02-18 00:53:15.058293196 +0000 UTC m=+1668.435644148" Feb 18 00:53:20 crc kubenswrapper[4847]: I0218 00:53:20.405262 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:53:20 crc kubenswrapper[4847]: E0218 00:53:20.407697 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:53:20 crc kubenswrapper[4847]: E0218 00:53:20.408806 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:53:24 crc kubenswrapper[4847]: E0218 00:53:24.501203 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 00:53:24 crc kubenswrapper[4847]: E0218 00:53:24.502112 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 00:53:24 crc kubenswrapper[4847]: E0218 00:53:24.502401 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjwt
2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-k4t5r_openstack(452f74c1-fa5f-464b-9943-a4a1c2d5c48a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 00:53:24 crc kubenswrapper[4847]: E0218 00:53:24.503548 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:53:30 crc kubenswrapper[4847]: I0218 00:53:30.374235 4847 scope.go:117] "RemoveContainer" containerID="fd6fa9869acc2ce9ffe611456f74bb577a74eecdb72c6b050783772f0a9b92fe" Feb 18 00:53:30 crc kubenswrapper[4847]: I0218 00:53:30.417900 4847 scope.go:117] "RemoveContainer" containerID="6be4f2fc66f2c95cd71292cb373c9db0c755a4cbcb5ca698963e6c82a8ceb663" Feb 18 00:53:30 crc kubenswrapper[4847]: I0218 00:53:30.484237 4847 scope.go:117] "RemoveContainer" containerID="c969932639a6afbb90efda97d2de65bcf1c1bf97985356a720ae9cc66837c67d" Feb 18 00:53:31 crc kubenswrapper[4847]: I0218 00:53:31.406177 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:53:31 crc kubenswrapper[4847]: E0218 00:53:31.407383 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:53:32 crc kubenswrapper[4847]: E0218 00:53:32.534701 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 00:53:32 crc kubenswrapper[4847]: E0218 00:53:32.535233 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 00:53:32 crc kubenswrapper[4847]: E0218 00:53:32.535480 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9fhbdh67dh64h568hch65dhb8h67chf7h66dhfh585h565h85h554h68ch8bh56ch677h57bhb5h544h575hd7h5f8hc7h66dh669h57bh589h56cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-
ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 18 00:53:32 crc kubenswrapper[4847]: E0218 00:53:32.536911 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:53:39 crc kubenswrapper[4847]: E0218 00:53:39.408203 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:53:42 crc kubenswrapper[4847]: I0218 00:53:42.404213 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:53:42 crc kubenswrapper[4847]: E0218 00:53:42.405058 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:53:45 crc kubenswrapper[4847]: E0218 00:53:45.409211 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:53:53 crc kubenswrapper[4847]: E0218 00:53:53.407017 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:53:57 crc kubenswrapper[4847]: I0218 00:53:57.413932 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:53:57 crc kubenswrapper[4847]: E0218 00:53:57.414987 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:54:00 crc kubenswrapper[4847]: E0218 00:54:00.409808 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:54:05 crc kubenswrapper[4847]: E0218 00:54:05.410829 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:54:08 crc kubenswrapper[4847]: I0218 00:54:08.404652 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:54:08 crc kubenswrapper[4847]: E0218 00:54:08.405481 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:54:11 crc kubenswrapper[4847]: E0218 00:54:11.407193 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:54:18 crc kubenswrapper[4847]: E0218 00:54:18.408863 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:54:22 crc kubenswrapper[4847]: I0218 00:54:22.405163 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:54:22 crc kubenswrapper[4847]: E0218 00:54:22.406295 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:54:23 crc kubenswrapper[4847]: E0218 00:54:23.409996 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:54:30 crc kubenswrapper[4847]: I0218 00:54:30.674433 4847 scope.go:117] "RemoveContainer" containerID="6de6202f5b0ab30ac8647ef1d1bb24fdd7af4dd99fa43095f095db6b1682ecd2" Feb 18 00:54:30 crc kubenswrapper[4847]: I0218 00:54:30.730868 4847 scope.go:117] "RemoveContainer" containerID="30e074840237a94349e6e93cf790a01ac09f029dbf7c13d41eb502886bd027cf" Feb 18 00:54:33 crc kubenswrapper[4847]: E0218 00:54:33.410276 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:54:34 crc kubenswrapper[4847]: E0218 00:54:34.409393 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:54:36 crc kubenswrapper[4847]: I0218 00:54:36.405904 4847 scope.go:117] "RemoveContainer" 
containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:54:36 crc kubenswrapper[4847]: E0218 00:54:36.408265 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:54:47 crc kubenswrapper[4847]: I0218 00:54:47.415280 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:54:47 crc kubenswrapper[4847]: E0218 00:54:47.416417 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:54:47 crc kubenswrapper[4847]: E0218 00:54:47.517484 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 00:54:47 crc kubenswrapper[4847]: E0218 00:54:47.517598 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 00:54:47 crc kubenswrapper[4847]: E0218 00:54:47.517806 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjwt
2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-k4t5r_openstack(452f74c1-fa5f-464b-9943-a4a1c2d5c48a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 00:54:47 crc kubenswrapper[4847]: E0218 00:54:47.519050 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:54:48 crc kubenswrapper[4847]: E0218 00:54:48.407324 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:54:58 crc kubenswrapper[4847]: E0218 00:54:58.407138 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:55:00 crc kubenswrapper[4847]: E0218 00:55:00.557034 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 00:55:00 crc kubenswrapper[4847]: E0218 00:55:00.557461 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 00:55:00 crc kubenswrapper[4847]: E0218 00:55:00.557648 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9fhbdh67dh64h568hch65dhb8h67chf7h66dhfh585h565h85h554h68ch8bh56ch677h57bhb5h544h575hd7h5f8hc7h66dh669h57bh589h56cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 00:55:00 crc kubenswrapper[4847]: E0218 00:55:00.558897 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:55:01 crc kubenswrapper[4847]: I0218 00:55:01.405262 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:55:01 crc kubenswrapper[4847]: E0218 00:55:01.405909 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:55:10 crc kubenswrapper[4847]: E0218 00:55:10.408406 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:55:11 crc kubenswrapper[4847]: E0218 00:55:11.408016 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:55:15 crc kubenswrapper[4847]: I0218 00:55:15.405129 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:55:15 crc kubenswrapper[4847]: E0218 00:55:15.406378 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 00:55:22 crc kubenswrapper[4847]: E0218 00:55:22.406813 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:55:24 crc kubenswrapper[4847]: E0218 00:55:24.408114 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:55:26 crc kubenswrapper[4847]: I0218 00:55:26.405256 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:55:26 crc kubenswrapper[4847]: I0218 00:55:26.936542 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerStarted","Data":"23f3a796a2412e9ab1c0e2914b0f2abb3867d28ef0847371c851e8c2e11a6769"} Feb 18 00:55:30 crc kubenswrapper[4847]: I0218 00:55:30.873054 4847 scope.go:117] "RemoveContainer" containerID="907bac399e438cfe5a24a2d99de0d7cd1b40908df749631f1f3ab4baff3f4744" Feb 18 00:55:33 crc kubenswrapper[4847]: E0218 00:55:33.408140 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:55:35 crc kubenswrapper[4847]: E0218 00:55:35.409350 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:55:48 crc kubenswrapper[4847]: E0218 00:55:48.407253 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:55:49 crc kubenswrapper[4847]: E0218 00:55:49.409054 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:56:00 crc kubenswrapper[4847]: E0218 00:56:00.418902 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:56:02 crc kubenswrapper[4847]: E0218 00:56:02.407039 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:56:09 crc kubenswrapper[4847]: I0218 00:56:09.544885 4847 generic.go:334] "Generic (PLEG): container finished" podID="7c608e56-c3b4-4a23-ac5e-2994862ffea6" containerID="ac5d8d86dcb1d42039646f04926ffafc182b434ca62e086128c99e7dd7990957" exitCode=0 Feb 18 00:56:09 crc kubenswrapper[4847]: I0218 00:56:09.545005 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl" event={"ID":"7c608e56-c3b4-4a23-ac5e-2994862ffea6","Type":"ContainerDied","Data":"ac5d8d86dcb1d42039646f04926ffafc182b434ca62e086128c99e7dd7990957"} Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.249455 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.340259 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c608e56-c3b4-4a23-ac5e-2994862ffea6-ssh-key-openstack-edpm-ipam\") pod \"7c608e56-c3b4-4a23-ac5e-2994862ffea6\" (UID: \"7c608e56-c3b4-4a23-ac5e-2994862ffea6\") " Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.340343 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46q89\" (UniqueName: \"kubernetes.io/projected/7c608e56-c3b4-4a23-ac5e-2994862ffea6-kube-api-access-46q89\") pod \"7c608e56-c3b4-4a23-ac5e-2994862ffea6\" (UID: \"7c608e56-c3b4-4a23-ac5e-2994862ffea6\") " Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.340450 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c608e56-c3b4-4a23-ac5e-2994862ffea6-inventory\") pod 
\"7c608e56-c3b4-4a23-ac5e-2994862ffea6\" (UID: \"7c608e56-c3b4-4a23-ac5e-2994862ffea6\") " Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.340709 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c608e56-c3b4-4a23-ac5e-2994862ffea6-bootstrap-combined-ca-bundle\") pod \"7c608e56-c3b4-4a23-ac5e-2994862ffea6\" (UID: \"7c608e56-c3b4-4a23-ac5e-2994862ffea6\") " Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.349531 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c608e56-c3b4-4a23-ac5e-2994862ffea6-kube-api-access-46q89" (OuterVolumeSpecName: "kube-api-access-46q89") pod "7c608e56-c3b4-4a23-ac5e-2994862ffea6" (UID: "7c608e56-c3b4-4a23-ac5e-2994862ffea6"). InnerVolumeSpecName "kube-api-access-46q89". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.350909 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c608e56-c3b4-4a23-ac5e-2994862ffea6-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "7c608e56-c3b4-4a23-ac5e-2994862ffea6" (UID: "7c608e56-c3b4-4a23-ac5e-2994862ffea6"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.394870 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c608e56-c3b4-4a23-ac5e-2994862ffea6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7c608e56-c3b4-4a23-ac5e-2994862ffea6" (UID: "7c608e56-c3b4-4a23-ac5e-2994862ffea6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.398862 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c608e56-c3b4-4a23-ac5e-2994862ffea6-inventory" (OuterVolumeSpecName: "inventory") pod "7c608e56-c3b4-4a23-ac5e-2994862ffea6" (UID: "7c608e56-c3b4-4a23-ac5e-2994862ffea6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.444377 4847 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7c608e56-c3b4-4a23-ac5e-2994862ffea6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.444418 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46q89\" (UniqueName: \"kubernetes.io/projected/7c608e56-c3b4-4a23-ac5e-2994862ffea6-kube-api-access-46q89\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.444437 4847 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7c608e56-c3b4-4a23-ac5e-2994862ffea6-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.444455 4847 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c608e56-c3b4-4a23-ac5e-2994862ffea6-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.578527 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl" event={"ID":"7c608e56-c3b4-4a23-ac5e-2994862ffea6","Type":"ContainerDied","Data":"b64d7a2d5e67422a05de4f6ec9a71f43b19fd0eb205e11a79b248718862dd192"} Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.578597 4847 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b64d7a2d5e67422a05de4f6ec9a71f43b19fd0eb205e11a79b248718862dd192" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.578649 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.711632 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z"] Feb 18 00:56:11 crc kubenswrapper[4847]: E0218 00:56:11.712639 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c608e56-c3b4-4a23-ac5e-2994862ffea6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.712685 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c608e56-c3b4-4a23-ac5e-2994862ffea6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.713103 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c608e56-c3b4-4a23-ac5e-2994862ffea6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.714393 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.719047 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.719467 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.719627 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xzl9l" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.722778 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.726175 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z"] Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.752025 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5jpb\" (UniqueName: \"kubernetes.io/projected/595b7464-bb09-48f6-ae94-96bc8ed4cd16-kube-api-access-c5jpb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z\" (UID: \"595b7464-bb09-48f6-ae94-96bc8ed4cd16\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.752107 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/595b7464-bb09-48f6-ae94-96bc8ed4cd16-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z\" (UID: \"595b7464-bb09-48f6-ae94-96bc8ed4cd16\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z" Feb 18 00:56:11 crc kubenswrapper[4847]: 
I0218 00:56:11.752208 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/595b7464-bb09-48f6-ae94-96bc8ed4cd16-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z\" (UID: \"595b7464-bb09-48f6-ae94-96bc8ed4cd16\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.855186 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5jpb\" (UniqueName: \"kubernetes.io/projected/595b7464-bb09-48f6-ae94-96bc8ed4cd16-kube-api-access-c5jpb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z\" (UID: \"595b7464-bb09-48f6-ae94-96bc8ed4cd16\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.855783 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/595b7464-bb09-48f6-ae94-96bc8ed4cd16-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z\" (UID: \"595b7464-bb09-48f6-ae94-96bc8ed4cd16\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.856166 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/595b7464-bb09-48f6-ae94-96bc8ed4cd16-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z\" (UID: \"595b7464-bb09-48f6-ae94-96bc8ed4cd16\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.861854 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/595b7464-bb09-48f6-ae94-96bc8ed4cd16-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z\" (UID: \"595b7464-bb09-48f6-ae94-96bc8ed4cd16\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.872335 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/595b7464-bb09-48f6-ae94-96bc8ed4cd16-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z\" (UID: \"595b7464-bb09-48f6-ae94-96bc8ed4cd16\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z" Feb 18 00:56:11 crc kubenswrapper[4847]: I0218 00:56:11.878621 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5jpb\" (UniqueName: \"kubernetes.io/projected/595b7464-bb09-48f6-ae94-96bc8ed4cd16-kube-api-access-c5jpb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z\" (UID: \"595b7464-bb09-48f6-ae94-96bc8ed4cd16\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z" Feb 18 00:56:12 crc kubenswrapper[4847]: I0218 00:56:12.048894 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z" Feb 18 00:56:12 crc kubenswrapper[4847]: I0218 00:56:12.729315 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z"] Feb 18 00:56:12 crc kubenswrapper[4847]: W0218 00:56:12.731826 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod595b7464_bb09_48f6_ae94_96bc8ed4cd16.slice/crio-e0ad822b2f09a6fbf0bcd531a9e5c8c3193d11faa3861576005062cccfb896ee WatchSource:0}: Error finding container e0ad822b2f09a6fbf0bcd531a9e5c8c3193d11faa3861576005062cccfb896ee: Status 404 returned error can't find the container with id e0ad822b2f09a6fbf0bcd531a9e5c8c3193d11faa3861576005062cccfb896ee Feb 18 00:56:13 crc kubenswrapper[4847]: I0218 00:56:13.608889 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z" event={"ID":"595b7464-bb09-48f6-ae94-96bc8ed4cd16","Type":"ContainerStarted","Data":"0940c472007042c1a579572638b71ce766a7e323ab4f832a80e8ed9ad0aa61a1"} Feb 18 00:56:13 crc kubenswrapper[4847]: I0218 00:56:13.609449 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z" event={"ID":"595b7464-bb09-48f6-ae94-96bc8ed4cd16","Type":"ContainerStarted","Data":"e0ad822b2f09a6fbf0bcd531a9e5c8c3193d11faa3861576005062cccfb896ee"} Feb 18 00:56:13 crc kubenswrapper[4847]: I0218 00:56:13.655028 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z" podStartSLOduration=2.211085948 podStartE2EDuration="2.654946373s" podCreationTimestamp="2026-02-18 00:56:11 +0000 UTC" firstStartedPulling="2026-02-18 00:56:12.73551312 +0000 UTC m=+1846.112864072" lastFinishedPulling="2026-02-18 00:56:13.179373545 +0000 
UTC m=+1846.556724497" observedRunningTime="2026-02-18 00:56:13.638553669 +0000 UTC m=+1847.015904621" watchObservedRunningTime="2026-02-18 00:56:13.654946373 +0000 UTC m=+1847.032297355" Feb 18 00:56:14 crc kubenswrapper[4847]: E0218 00:56:14.408129 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:56:15 crc kubenswrapper[4847]: E0218 00:56:15.406634 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:56:27 crc kubenswrapper[4847]: E0218 00:56:27.447486 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:56:28 crc kubenswrapper[4847]: E0218 00:56:28.409008 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:56:30 crc kubenswrapper[4847]: I0218 00:56:30.969540 4847 scope.go:117] "RemoveContainer" containerID="8cbbea99e9e673e1548ce862f3993b8cfea9bf43ae00947a4fcc8f6bc37891ba" Feb 18 00:56:31 crc 
kubenswrapper[4847]: I0218 00:56:31.014016 4847 scope.go:117] "RemoveContainer" containerID="2fda87e2d268beee4b519656d4738f502c2059cc5df0e971986a493d52ab56c2" Feb 18 00:56:40 crc kubenswrapper[4847]: E0218 00:56:40.406997 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:56:40 crc kubenswrapper[4847]: E0218 00:56:40.407655 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:56:53 crc kubenswrapper[4847]: E0218 00:56:53.407184 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:56:53 crc kubenswrapper[4847]: E0218 00:56:53.407273 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:57:04 crc kubenswrapper[4847]: E0218 00:57:04.407523 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:57:06 crc kubenswrapper[4847]: E0218 00:57:06.406618 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:57:13 crc kubenswrapper[4847]: I0218 00:57:13.110285 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-wmjzw"] Feb 18 00:57:13 crc kubenswrapper[4847]: I0218 00:57:13.124184 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7fd1-account-create-update-5t9jf"] Feb 18 00:57:13 crc kubenswrapper[4847]: I0218 00:57:13.133759 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-h4rtr"] Feb 18 00:57:13 crc kubenswrapper[4847]: I0218 00:57:13.147145 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-qffjj"] Feb 18 00:57:13 crc kubenswrapper[4847]: I0218 00:57:13.156587 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-a882-account-create-update-zprw4"] Feb 18 00:57:13 crc kubenswrapper[4847]: I0218 00:57:13.166731 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-wmjzw"] Feb 18 00:57:13 crc kubenswrapper[4847]: I0218 00:57:13.184143 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7fd1-account-create-update-5t9jf"] Feb 18 00:57:13 crc kubenswrapper[4847]: I0218 00:57:13.199493 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-h4rtr"] Feb 18 00:57:13 crc kubenswrapper[4847]: 
I0218 00:57:13.209004 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-qffjj"] Feb 18 00:57:13 crc kubenswrapper[4847]: I0218 00:57:13.226936 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-a882-account-create-update-zprw4"] Feb 18 00:57:13 crc kubenswrapper[4847]: I0218 00:57:13.419162 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b9d5176-c1d3-4862-aa1d-4b0c5c412d48" path="/var/lib/kubelet/pods/1b9d5176-c1d3-4862-aa1d-4b0c5c412d48/volumes" Feb 18 00:57:13 crc kubenswrapper[4847]: I0218 00:57:13.420058 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dd945e5-694d-47a5-9817-8e6cff5a1c8b" path="/var/lib/kubelet/pods/4dd945e5-694d-47a5-9817-8e6cff5a1c8b/volumes" Feb 18 00:57:13 crc kubenswrapper[4847]: I0218 00:57:13.420858 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="899af78e-0f52-4b70-8817-47ea4fe4d344" path="/var/lib/kubelet/pods/899af78e-0f52-4b70-8817-47ea4fe4d344/volumes" Feb 18 00:57:13 crc kubenswrapper[4847]: I0218 00:57:13.421651 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6d8310c-c9e5-49cc-bc20-af6aacf1487d" path="/var/lib/kubelet/pods/d6d8310c-c9e5-49cc-bc20-af6aacf1487d/volumes" Feb 18 00:57:13 crc kubenswrapper[4847]: I0218 00:57:13.422997 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecd4a05d-c720-48e0-9ef0-1101d4ee0a17" path="/var/lib/kubelet/pods/ecd4a05d-c720-48e0-9ef0-1101d4ee0a17/volumes" Feb 18 00:57:16 crc kubenswrapper[4847]: E0218 00:57:16.409580 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:57:18 crc kubenswrapper[4847]: I0218 
00:57:18.048381 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a47c-account-create-update-fcsjz"] Feb 18 00:57:18 crc kubenswrapper[4847]: I0218 00:57:18.088784 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-1377-account-create-update-76bd5"] Feb 18 00:57:18 crc kubenswrapper[4847]: I0218 00:57:18.114867 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-8qtg4"] Feb 18 00:57:18 crc kubenswrapper[4847]: I0218 00:57:18.125298 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-1377-account-create-update-76bd5"] Feb 18 00:57:18 crc kubenswrapper[4847]: I0218 00:57:18.137753 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a47c-account-create-update-fcsjz"] Feb 18 00:57:18 crc kubenswrapper[4847]: I0218 00:57:18.153015 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-8qtg4"] Feb 18 00:57:19 crc kubenswrapper[4847]: I0218 00:57:19.422193 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4256ea1b-a495-4301-b5b9-1e376c78852e" path="/var/lib/kubelet/pods/4256ea1b-a495-4301-b5b9-1e376c78852e/volumes" Feb 18 00:57:19 crc kubenswrapper[4847]: I0218 00:57:19.424287 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a17f384-68bb-4ce1-be12-102c477b5968" path="/var/lib/kubelet/pods/9a17f384-68bb-4ce1-be12-102c477b5968/volumes" Feb 18 00:57:19 crc kubenswrapper[4847]: I0218 00:57:19.425646 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfb5a063-20d2-4791-a141-7e87555bc17d" path="/var/lib/kubelet/pods/cfb5a063-20d2-4791-a141-7e87555bc17d/volumes" Feb 18 00:57:21 crc kubenswrapper[4847]: E0218 00:57:21.412199 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:57:23 crc kubenswrapper[4847]: I0218 00:57:23.043826 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-wjfr7"] Feb 18 00:57:23 crc kubenswrapper[4847]: I0218 00:57:23.060570 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-e8b0-account-create-update-qnp69"] Feb 18 00:57:23 crc kubenswrapper[4847]: I0218 00:57:23.072347 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-wjfr7"] Feb 18 00:57:23 crc kubenswrapper[4847]: I0218 00:57:23.086500 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-e8b0-account-create-update-qnp69"] Feb 18 00:57:23 crc kubenswrapper[4847]: I0218 00:57:23.428114 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="815709d3-9a7d-4e0e-a44e-a60ad1428919" path="/var/lib/kubelet/pods/815709d3-9a7d-4e0e-a44e-a60ad1428919/volumes" Feb 18 00:57:23 crc kubenswrapper[4847]: I0218 00:57:23.429527 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e79eaa76-7d45-436b-a23f-157ce98678ba" path="/var/lib/kubelet/pods/e79eaa76-7d45-436b-a23f-157ce98678ba/volumes" Feb 18 00:57:23 crc kubenswrapper[4847]: I0218 00:57:23.608369 4847 generic.go:334] "Generic (PLEG): container finished" podID="595b7464-bb09-48f6-ae94-96bc8ed4cd16" containerID="0940c472007042c1a579572638b71ce766a7e323ab4f832a80e8ed9ad0aa61a1" exitCode=0 Feb 18 00:57:23 crc kubenswrapper[4847]: I0218 00:57:23.608547 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z" 
event={"ID":"595b7464-bb09-48f6-ae94-96bc8ed4cd16","Type":"ContainerDied","Data":"0940c472007042c1a579572638b71ce766a7e323ab4f832a80e8ed9ad0aa61a1"} Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.269293 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.402936 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/595b7464-bb09-48f6-ae94-96bc8ed4cd16-inventory\") pod \"595b7464-bb09-48f6-ae94-96bc8ed4cd16\" (UID: \"595b7464-bb09-48f6-ae94-96bc8ed4cd16\") " Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.403062 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5jpb\" (UniqueName: \"kubernetes.io/projected/595b7464-bb09-48f6-ae94-96bc8ed4cd16-kube-api-access-c5jpb\") pod \"595b7464-bb09-48f6-ae94-96bc8ed4cd16\" (UID: \"595b7464-bb09-48f6-ae94-96bc8ed4cd16\") " Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.403182 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/595b7464-bb09-48f6-ae94-96bc8ed4cd16-ssh-key-openstack-edpm-ipam\") pod \"595b7464-bb09-48f6-ae94-96bc8ed4cd16\" (UID: \"595b7464-bb09-48f6-ae94-96bc8ed4cd16\") " Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.414512 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/595b7464-bb09-48f6-ae94-96bc8ed4cd16-kube-api-access-c5jpb" (OuterVolumeSpecName: "kube-api-access-c5jpb") pod "595b7464-bb09-48f6-ae94-96bc8ed4cd16" (UID: "595b7464-bb09-48f6-ae94-96bc8ed4cd16"). InnerVolumeSpecName "kube-api-access-c5jpb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.443880 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/595b7464-bb09-48f6-ae94-96bc8ed4cd16-inventory" (OuterVolumeSpecName: "inventory") pod "595b7464-bb09-48f6-ae94-96bc8ed4cd16" (UID: "595b7464-bb09-48f6-ae94-96bc8ed4cd16"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.454732 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/595b7464-bb09-48f6-ae94-96bc8ed4cd16-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "595b7464-bb09-48f6-ae94-96bc8ed4cd16" (UID: "595b7464-bb09-48f6-ae94-96bc8ed4cd16"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.505797 4847 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/595b7464-bb09-48f6-ae94-96bc8ed4cd16-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.505837 4847 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/595b7464-bb09-48f6-ae94-96bc8ed4cd16-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.505847 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5jpb\" (UniqueName: \"kubernetes.io/projected/595b7464-bb09-48f6-ae94-96bc8ed4cd16-kube-api-access-c5jpb\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.636883 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z" 
event={"ID":"595b7464-bb09-48f6-ae94-96bc8ed4cd16","Type":"ContainerDied","Data":"e0ad822b2f09a6fbf0bcd531a9e5c8c3193d11faa3861576005062cccfb896ee"} Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.637345 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0ad822b2f09a6fbf0bcd531a9e5c8c3193d11faa3861576005062cccfb896ee" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.636988 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.752862 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s"] Feb 18 00:57:25 crc kubenswrapper[4847]: E0218 00:57:25.753392 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="595b7464-bb09-48f6-ae94-96bc8ed4cd16" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.753417 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="595b7464-bb09-48f6-ae94-96bc8ed4cd16" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.753697 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="595b7464-bb09-48f6-ae94-96bc8ed4cd16" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.754548 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.758795 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xzl9l" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.759050 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.759098 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.759134 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.772271 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s"] Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.823320 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c781aeb-a3ac-4a08-a055-ed2846466b8b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s\" (UID: \"6c781aeb-a3ac-4a08-a055-ed2846466b8b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.823410 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c781aeb-a3ac-4a08-a055-ed2846466b8b-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s\" (UID: \"6c781aeb-a3ac-4a08-a055-ed2846466b8b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s" Feb 18 00:57:25 crc kubenswrapper[4847]: 
I0218 00:57:25.823474 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz7rg\" (UniqueName: \"kubernetes.io/projected/6c781aeb-a3ac-4a08-a055-ed2846466b8b-kube-api-access-nz7rg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s\" (UID: \"6c781aeb-a3ac-4a08-a055-ed2846466b8b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.925831 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c781aeb-a3ac-4a08-a055-ed2846466b8b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s\" (UID: \"6c781aeb-a3ac-4a08-a055-ed2846466b8b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.925981 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c781aeb-a3ac-4a08-a055-ed2846466b8b-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s\" (UID: \"6c781aeb-a3ac-4a08-a055-ed2846466b8b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.926107 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz7rg\" (UniqueName: \"kubernetes.io/projected/6c781aeb-a3ac-4a08-a055-ed2846466b8b-kube-api-access-nz7rg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s\" (UID: \"6c781aeb-a3ac-4a08-a055-ed2846466b8b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.931141 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/6c781aeb-a3ac-4a08-a055-ed2846466b8b-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s\" (UID: \"6c781aeb-a3ac-4a08-a055-ed2846466b8b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.931786 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c781aeb-a3ac-4a08-a055-ed2846466b8b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s\" (UID: \"6c781aeb-a3ac-4a08-a055-ed2846466b8b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s" Feb 18 00:57:25 crc kubenswrapper[4847]: I0218 00:57:25.945113 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz7rg\" (UniqueName: \"kubernetes.io/projected/6c781aeb-a3ac-4a08-a055-ed2846466b8b-kube-api-access-nz7rg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s\" (UID: \"6c781aeb-a3ac-4a08-a055-ed2846466b8b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s" Feb 18 00:57:26 crc kubenswrapper[4847]: I0218 00:57:26.090328 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s" Feb 18 00:57:26 crc kubenswrapper[4847]: I0218 00:57:26.767234 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s"] Feb 18 00:57:26 crc kubenswrapper[4847]: W0218 00:57:26.775873 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c781aeb_a3ac_4a08_a055_ed2846466b8b.slice/crio-313d8040670653f599967ede014571b36ebe520c6fde444229ce63d92c816c9f WatchSource:0}: Error finding container 313d8040670653f599967ede014571b36ebe520c6fde444229ce63d92c816c9f: Status 404 returned error can't find the container with id 313d8040670653f599967ede014571b36ebe520c6fde444229ce63d92c816c9f Feb 18 00:57:26 crc kubenswrapper[4847]: I0218 00:57:26.779796 4847 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 00:57:27 crc kubenswrapper[4847]: I0218 00:57:27.664744 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s" event={"ID":"6c781aeb-a3ac-4a08-a055-ed2846466b8b","Type":"ContainerStarted","Data":"370c39b0e25c48015857d784b4727f2e729c0747665c6dfc2c4eaafb6a8847da"} Feb 18 00:57:27 crc kubenswrapper[4847]: I0218 00:57:27.665218 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s" event={"ID":"6c781aeb-a3ac-4a08-a055-ed2846466b8b","Type":"ContainerStarted","Data":"313d8040670653f599967ede014571b36ebe520c6fde444229ce63d92c816c9f"} Feb 18 00:57:27 crc kubenswrapper[4847]: I0218 00:57:27.697159 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s" podStartSLOduration=2.199670657 podStartE2EDuration="2.697133309s" podCreationTimestamp="2026-02-18 
00:57:25 +0000 UTC" firstStartedPulling="2026-02-18 00:57:26.779332137 +0000 UTC m=+1920.156683109" lastFinishedPulling="2026-02-18 00:57:27.276794809 +0000 UTC m=+1920.654145761" observedRunningTime="2026-02-18 00:57:27.681169386 +0000 UTC m=+1921.058520358" watchObservedRunningTime="2026-02-18 00:57:27.697133309 +0000 UTC m=+1921.074484251" Feb 18 00:57:29 crc kubenswrapper[4847]: E0218 00:57:29.556221 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 00:57:29 crc kubenswrapper[4847]: E0218 00:57:29.556858 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 00:57:29 crc kubenswrapper[4847]: E0218 00:57:29.557024 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjwt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-k4t5r_openstack(452f74c1-fa5f-464b-9943-a4a1c2d5c48a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 00:57:29 crc kubenswrapper[4847]: E0218 00:57:29.558208 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:57:31 crc kubenswrapper[4847]: I0218 00:57:31.109941 4847 scope.go:117] "RemoveContainer" containerID="6902ccc554a3be0078859f47e8ab23be9b6781629de6ebe8ca99db5aa763a2ea" Feb 18 00:57:31 crc kubenswrapper[4847]: I0218 00:57:31.150265 4847 scope.go:117] "RemoveContainer" containerID="8f9a33a665621e44dc52192e579be159d8e9593d68e6567ffbf97c1e6d9cc0e3" Feb 18 00:57:31 crc kubenswrapper[4847]: I0218 00:57:31.235972 4847 scope.go:117] "RemoveContainer" containerID="c188b0d181ea7fa661b29d20be5bee627936357fcb13f90477e11b5fbe6f1bfa" Feb 18 00:57:31 crc kubenswrapper[4847]: I0218 00:57:31.292699 4847 scope.go:117] "RemoveContainer" containerID="bec2c0fe75fb488f5c4425186ceb2944d2c2f4b0024288d2a418007b0a5a5b16" Feb 18 00:57:31 crc kubenswrapper[4847]: I0218 00:57:31.350630 4847 scope.go:117] "RemoveContainer" containerID="90c199098c0b02ac71b902575c68f3c65279767844ac84093136fda9f2f10b27" Feb 18 00:57:31 crc kubenswrapper[4847]: I0218 00:57:31.411513 4847 scope.go:117] "RemoveContainer" containerID="3a942e42f549a01a2027b5b1a3435bbe79f3c127a3d1d79fbff2faa2e1123641" Feb 18 00:57:31 crc kubenswrapper[4847]: I0218 00:57:31.461898 4847 scope.go:117] "RemoveContainer" containerID="3ea2820168a8de51c02a4f24b4add952ccb5457d1e7772e6e8c533a559ebc60b" Feb 18 00:57:31 crc kubenswrapper[4847]: I0218 00:57:31.498710 4847 scope.go:117] "RemoveContainer" containerID="64911e2822c78a88847f42b182b440fbdaa8d33c3796b8e3dfea2abd7889134a" Feb 18 00:57:31 crc kubenswrapper[4847]: I0218 00:57:31.537437 4847 scope.go:117] "RemoveContainer" containerID="d348f31e549a45daa9b07e9273e5b941de4bacfc801b3d6868e71d7edeffa6af" Feb 18 00:57:31 crc kubenswrapper[4847]: I0218 00:57:31.560993 4847 scope.go:117] "RemoveContainer" containerID="a953a1d8b5fad80f99851cb6e362a10662d6f7e5c265d5247a60e999b411950a" Feb 18 00:57:31 crc kubenswrapper[4847]: I0218 00:57:31.589724 4847 scope.go:117] 
"RemoveContainer" containerID="15ff6b6247fda6a53fca1b34fde94429cee51a8e745e73c5d5eb6a24d54348f0" Feb 18 00:57:31 crc kubenswrapper[4847]: I0218 00:57:31.614269 4847 scope.go:117] "RemoveContainer" containerID="bf16dec9289475e1b58924b3fcb59776a5f8705c970fce17296c1e5a82cdd2c5" Feb 18 00:57:31 crc kubenswrapper[4847]: I0218 00:57:31.641848 4847 scope.go:117] "RemoveContainer" containerID="7efde83610347c0d025e46c8e7d680a2ae94fbb154e2d435215c941d02a50b88" Feb 18 00:57:31 crc kubenswrapper[4847]: I0218 00:57:31.671607 4847 scope.go:117] "RemoveContainer" containerID="894b3c076fbf9485d22494889a53c3916264c4b03f348b1db79bc06735194bd9" Feb 18 00:57:31 crc kubenswrapper[4847]: I0218 00:57:31.694483 4847 scope.go:117] "RemoveContainer" containerID="e2e70a0142a8388468f65caf6ae465301e9e17d4f5a9265048860af00a955451" Feb 18 00:57:31 crc kubenswrapper[4847]: I0218 00:57:31.731828 4847 scope.go:117] "RemoveContainer" containerID="3388dd244b26737d6be8a2c57d074c37cac5aaa2d3dcedeaa5f3518a164dee09" Feb 18 00:57:31 crc kubenswrapper[4847]: I0218 00:57:31.762712 4847 scope.go:117] "RemoveContainer" containerID="117299d45a910586ea273e72c6ca0bf14bec6a6bb258d63ed2fd5d97eee2cba3" Feb 18 00:57:31 crc kubenswrapper[4847]: I0218 00:57:31.788962 4847 scope.go:117] "RemoveContainer" containerID="9c0a07a32d365b4171acf280d59d30881e56eff9dd6d1ad4ce14824033012b71" Feb 18 00:57:31 crc kubenswrapper[4847]: I0218 00:57:31.810676 4847 scope.go:117] "RemoveContainer" containerID="3762d3bdfb43664204c4ac87da22ae93f428cd087f160ef6a0509417461d225f" Feb 18 00:57:32 crc kubenswrapper[4847]: I0218 00:57:32.779043 4847 generic.go:334] "Generic (PLEG): container finished" podID="6c781aeb-a3ac-4a08-a055-ed2846466b8b" containerID="370c39b0e25c48015857d784b4727f2e729c0747665c6dfc2c4eaafb6a8847da" exitCode=0 Feb 18 00:57:32 crc kubenswrapper[4847]: I0218 00:57:32.779108 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s" 
event={"ID":"6c781aeb-a3ac-4a08-a055-ed2846466b8b","Type":"ContainerDied","Data":"370c39b0e25c48015857d784b4727f2e729c0747665c6dfc2c4eaafb6a8847da"} Feb 18 00:57:34 crc kubenswrapper[4847]: I0218 00:57:34.357425 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s" Feb 18 00:57:34 crc kubenswrapper[4847]: I0218 00:57:34.418703 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c781aeb-a3ac-4a08-a055-ed2846466b8b-inventory\") pod \"6c781aeb-a3ac-4a08-a055-ed2846466b8b\" (UID: \"6c781aeb-a3ac-4a08-a055-ed2846466b8b\") " Feb 18 00:57:34 crc kubenswrapper[4847]: I0218 00:57:34.418817 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz7rg\" (UniqueName: \"kubernetes.io/projected/6c781aeb-a3ac-4a08-a055-ed2846466b8b-kube-api-access-nz7rg\") pod \"6c781aeb-a3ac-4a08-a055-ed2846466b8b\" (UID: \"6c781aeb-a3ac-4a08-a055-ed2846466b8b\") " Feb 18 00:57:34 crc kubenswrapper[4847]: I0218 00:57:34.418948 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c781aeb-a3ac-4a08-a055-ed2846466b8b-ssh-key-openstack-edpm-ipam\") pod \"6c781aeb-a3ac-4a08-a055-ed2846466b8b\" (UID: \"6c781aeb-a3ac-4a08-a055-ed2846466b8b\") " Feb 18 00:57:34 crc kubenswrapper[4847]: I0218 00:57:34.428809 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c781aeb-a3ac-4a08-a055-ed2846466b8b-kube-api-access-nz7rg" (OuterVolumeSpecName: "kube-api-access-nz7rg") pod "6c781aeb-a3ac-4a08-a055-ed2846466b8b" (UID: "6c781aeb-a3ac-4a08-a055-ed2846466b8b"). InnerVolumeSpecName "kube-api-access-nz7rg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:57:34 crc kubenswrapper[4847]: I0218 00:57:34.449186 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c781aeb-a3ac-4a08-a055-ed2846466b8b-inventory" (OuterVolumeSpecName: "inventory") pod "6c781aeb-a3ac-4a08-a055-ed2846466b8b" (UID: "6c781aeb-a3ac-4a08-a055-ed2846466b8b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:57:34 crc kubenswrapper[4847]: I0218 00:57:34.465902 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c781aeb-a3ac-4a08-a055-ed2846466b8b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6c781aeb-a3ac-4a08-a055-ed2846466b8b" (UID: "6c781aeb-a3ac-4a08-a055-ed2846466b8b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:57:34 crc kubenswrapper[4847]: I0218 00:57:34.522653 4847 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c781aeb-a3ac-4a08-a055-ed2846466b8b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:34 crc kubenswrapper[4847]: I0218 00:57:34.522692 4847 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c781aeb-a3ac-4a08-a055-ed2846466b8b-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:34 crc kubenswrapper[4847]: I0218 00:57:34.522708 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz7rg\" (UniqueName: \"kubernetes.io/projected/6c781aeb-a3ac-4a08-a055-ed2846466b8b-kube-api-access-nz7rg\") on node \"crc\" DevicePath \"\"" Feb 18 00:57:34 crc kubenswrapper[4847]: I0218 00:57:34.812154 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s" 
event={"ID":"6c781aeb-a3ac-4a08-a055-ed2846466b8b","Type":"ContainerDied","Data":"313d8040670653f599967ede014571b36ebe520c6fde444229ce63d92c816c9f"} Feb 18 00:57:34 crc kubenswrapper[4847]: I0218 00:57:34.812309 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s" Feb 18 00:57:34 crc kubenswrapper[4847]: I0218 00:57:34.813767 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="313d8040670653f599967ede014571b36ebe520c6fde444229ce63d92c816c9f" Feb 18 00:57:34 crc kubenswrapper[4847]: I0218 00:57:34.930114 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-gdvd5"] Feb 18 00:57:34 crc kubenswrapper[4847]: E0218 00:57:34.930838 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c781aeb-a3ac-4a08-a055-ed2846466b8b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 00:57:34 crc kubenswrapper[4847]: I0218 00:57:34.930867 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c781aeb-a3ac-4a08-a055-ed2846466b8b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 00:57:34 crc kubenswrapper[4847]: I0218 00:57:34.931191 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c781aeb-a3ac-4a08-a055-ed2846466b8b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 18 00:57:34 crc kubenswrapper[4847]: I0218 00:57:34.932195 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gdvd5" Feb 18 00:57:34 crc kubenswrapper[4847]: I0218 00:57:34.934687 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xzl9l" Feb 18 00:57:34 crc kubenswrapper[4847]: I0218 00:57:34.935338 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 00:57:34 crc kubenswrapper[4847]: I0218 00:57:34.935445 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 00:57:34 crc kubenswrapper[4847]: I0218 00:57:34.936192 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 00:57:34 crc kubenswrapper[4847]: I0218 00:57:34.943502 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-gdvd5"] Feb 18 00:57:35 crc kubenswrapper[4847]: I0218 00:57:35.033447 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecbd2804-8c74-4962-ad9c-48f261845f8c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gdvd5\" (UID: \"ecbd2804-8c74-4962-ad9c-48f261845f8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gdvd5" Feb 18 00:57:35 crc kubenswrapper[4847]: I0218 00:57:35.033793 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bbhc\" (UniqueName: \"kubernetes.io/projected/ecbd2804-8c74-4962-ad9c-48f261845f8c-kube-api-access-9bbhc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gdvd5\" (UID: \"ecbd2804-8c74-4962-ad9c-48f261845f8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gdvd5" Feb 18 00:57:35 crc kubenswrapper[4847]: I0218 00:57:35.033945 4847 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecbd2804-8c74-4962-ad9c-48f261845f8c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gdvd5\" (UID: \"ecbd2804-8c74-4962-ad9c-48f261845f8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gdvd5" Feb 18 00:57:35 crc kubenswrapper[4847]: I0218 00:57:35.136492 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bbhc\" (UniqueName: \"kubernetes.io/projected/ecbd2804-8c74-4962-ad9c-48f261845f8c-kube-api-access-9bbhc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gdvd5\" (UID: \"ecbd2804-8c74-4962-ad9c-48f261845f8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gdvd5" Feb 18 00:57:35 crc kubenswrapper[4847]: I0218 00:57:35.136580 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecbd2804-8c74-4962-ad9c-48f261845f8c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gdvd5\" (UID: \"ecbd2804-8c74-4962-ad9c-48f261845f8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gdvd5" Feb 18 00:57:35 crc kubenswrapper[4847]: I0218 00:57:35.136781 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecbd2804-8c74-4962-ad9c-48f261845f8c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gdvd5\" (UID: \"ecbd2804-8c74-4962-ad9c-48f261845f8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gdvd5" Feb 18 00:57:35 crc kubenswrapper[4847]: I0218 00:57:35.143485 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecbd2804-8c74-4962-ad9c-48f261845f8c-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-gdvd5\" (UID: \"ecbd2804-8c74-4962-ad9c-48f261845f8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gdvd5" Feb 18 00:57:35 crc kubenswrapper[4847]: I0218 00:57:35.143928 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecbd2804-8c74-4962-ad9c-48f261845f8c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gdvd5\" (UID: \"ecbd2804-8c74-4962-ad9c-48f261845f8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gdvd5" Feb 18 00:57:35 crc kubenswrapper[4847]: I0218 00:57:35.169476 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bbhc\" (UniqueName: \"kubernetes.io/projected/ecbd2804-8c74-4962-ad9c-48f261845f8c-kube-api-access-9bbhc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gdvd5\" (UID: \"ecbd2804-8c74-4962-ad9c-48f261845f8c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gdvd5" Feb 18 00:57:35 crc kubenswrapper[4847]: I0218 00:57:35.265691 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gdvd5" Feb 18 00:57:35 crc kubenswrapper[4847]: I0218 00:57:35.872821 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-gdvd5"] Feb 18 00:57:35 crc kubenswrapper[4847]: W0218 00:57:35.891345 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecbd2804_8c74_4962_ad9c_48f261845f8c.slice/crio-deebc8eaa0164a82ae32e8cd657fb908432f9b210fd7c518acca9008ded4b69d WatchSource:0}: Error finding container deebc8eaa0164a82ae32e8cd657fb908432f9b210fd7c518acca9008ded4b69d: Status 404 returned error can't find the container with id deebc8eaa0164a82ae32e8cd657fb908432f9b210fd7c518acca9008ded4b69d Feb 18 00:57:36 crc kubenswrapper[4847]: E0218 00:57:36.408760 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:57:36 crc kubenswrapper[4847]: I0218 00:57:36.834675 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gdvd5" event={"ID":"ecbd2804-8c74-4962-ad9c-48f261845f8c","Type":"ContainerStarted","Data":"6761dfd4dfea1302018fa10317b3de10d93372e5f35785171df3ffd9c281011b"} Feb 18 00:57:36 crc kubenswrapper[4847]: I0218 00:57:36.835256 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gdvd5" event={"ID":"ecbd2804-8c74-4962-ad9c-48f261845f8c","Type":"ContainerStarted","Data":"deebc8eaa0164a82ae32e8cd657fb908432f9b210fd7c518acca9008ded4b69d"} Feb 18 00:57:36 crc kubenswrapper[4847]: I0218 00:57:36.865300 4847 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gdvd5" podStartSLOduration=2.441981005 podStartE2EDuration="2.865279139s" podCreationTimestamp="2026-02-18 00:57:34 +0000 UTC" firstStartedPulling="2026-02-18 00:57:35.895669289 +0000 UTC m=+1929.273020241" lastFinishedPulling="2026-02-18 00:57:36.318967413 +0000 UTC m=+1929.696318375" observedRunningTime="2026-02-18 00:57:36.864559371 +0000 UTC m=+1930.241910323" watchObservedRunningTime="2026-02-18 00:57:36.865279139 +0000 UTC m=+1930.242630091" Feb 18 00:57:42 crc kubenswrapper[4847]: E0218 00:57:42.406314 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:57:46 crc kubenswrapper[4847]: I0218 00:57:46.063434 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5p2lc"] Feb 18 00:57:46 crc kubenswrapper[4847]: I0218 00:57:46.084703 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-hjjdx"] Feb 18 00:57:46 crc kubenswrapper[4847]: I0218 00:57:46.111734 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6fa8-account-create-update-8g6f8"] Feb 18 00:57:46 crc kubenswrapper[4847]: I0218 00:57:46.127119 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-5p2lc"] Feb 18 00:57:46 crc kubenswrapper[4847]: I0218 00:57:46.145545 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-hjjdx"] Feb 18 00:57:46 crc kubenswrapper[4847]: I0218 00:57:46.158069 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6fa8-account-create-update-8g6f8"] Feb 18 00:57:47 crc 
kubenswrapper[4847]: I0218 00:57:47.048817 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-061e-account-create-update-khvj9"] Feb 18 00:57:47 crc kubenswrapper[4847]: I0218 00:57:47.061131 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-xg9hr"] Feb 18 00:57:47 crc kubenswrapper[4847]: I0218 00:57:47.069170 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-521e-account-create-update-crkzh"] Feb 18 00:57:47 crc kubenswrapper[4847]: I0218 00:57:47.078783 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-521e-account-create-update-crkzh"] Feb 18 00:57:47 crc kubenswrapper[4847]: I0218 00:57:47.088476 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-061e-account-create-update-khvj9"] Feb 18 00:57:47 crc kubenswrapper[4847]: I0218 00:57:47.097018 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-xg9hr"] Feb 18 00:57:47 crc kubenswrapper[4847]: I0218 00:57:47.432199 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc" path="/var/lib/kubelet/pods/0ae4a9ca-d932-4d6d-bac3-99d5e4e87abc/volumes" Feb 18 00:57:47 crc kubenswrapper[4847]: I0218 00:57:47.436748 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3" path="/var/lib/kubelet/pods/0bc2d0f2-add8-49b1-871b-e1b2db0b2cd3/volumes" Feb 18 00:57:47 crc kubenswrapper[4847]: I0218 00:57:47.438422 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="457ab17e-eca3-40eb-9116-fb82cbbcc65f" path="/var/lib/kubelet/pods/457ab17e-eca3-40eb-9116-fb82cbbcc65f/volumes" Feb 18 00:57:47 crc kubenswrapper[4847]: I0218 00:57:47.439736 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66b4e4a8-e7ee-4541-92de-2a0fe41f879b" path="/var/lib/kubelet/pods/66b4e4a8-e7ee-4541-92de-2a0fe41f879b/volumes" Feb 
18 00:57:47 crc kubenswrapper[4847]: I0218 00:57:47.441993 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a42b82b8-55dd-48ab-86ea-0c50c940c8f8" path="/var/lib/kubelet/pods/a42b82b8-55dd-48ab-86ea-0c50c940c8f8/volumes" Feb 18 00:57:47 crc kubenswrapper[4847]: I0218 00:57:47.443232 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dec96268-406b-4a40-8825-9e3f0938d457" path="/var/lib/kubelet/pods/dec96268-406b-4a40-8825-9e3f0938d457/volumes" Feb 18 00:57:47 crc kubenswrapper[4847]: E0218 00:57:47.566019 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 00:57:47 crc kubenswrapper[4847]: E0218 00:57:47.566119 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 00:57:47 crc kubenswrapper[4847]: E0218 00:57:47.567404 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9fhbdh67dh64h568hch65dhb8h67chf7h66dhfh585h565h85h554h68ch8bh56ch677h57bhb5h544h575hd7h5f8hc7h66dh669h57bh589h56cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 00:57:47 crc kubenswrapper[4847]: E0218 00:57:47.569100 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:57:50 crc kubenswrapper[4847]: I0218 00:57:50.046414 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-4238-account-create-update-jdjnk"] Feb 18 00:57:50 crc kubenswrapper[4847]: I0218 00:57:50.061240 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-d6zdc"] Feb 18 00:57:50 crc kubenswrapper[4847]: I0218 00:57:50.075100 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-d6zdc"] Feb 18 00:57:50 crc kubenswrapper[4847]: I0218 00:57:50.086548 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-4238-account-create-update-jdjnk"] Feb 18 00:57:50 crc kubenswrapper[4847]: I0218 00:57:50.096823 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-rfvrp"] Feb 18 00:57:50 crc kubenswrapper[4847]: I0218 00:57:50.109885 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-rfvrp"] Feb 18 00:57:51 crc kubenswrapper[4847]: I0218 00:57:51.417835 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28ae3c38-4d5f-4601-a112-4b11ec4324b2" path="/var/lib/kubelet/pods/28ae3c38-4d5f-4601-a112-4b11ec4324b2/volumes" Feb 18 00:57:51 crc kubenswrapper[4847]: I0218 00:57:51.418897 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b17f9edc-950a-4930-a9e8-cb5accfebfd0" path="/var/lib/kubelet/pods/b17f9edc-950a-4930-a9e8-cb5accfebfd0/volumes" Feb 18 00:57:51 crc kubenswrapper[4847]: I0218 00:57:51.419804 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b181ca27-a468-4527-a748-5cf4ac36fdb6" path="/var/lib/kubelet/pods/b181ca27-a468-4527-a748-5cf4ac36fdb6/volumes" Feb 18 00:57:53 crc kubenswrapper[4847]: I0218 00:57:53.037540 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-db-sync-qldtf"] Feb 18 00:57:53 crc kubenswrapper[4847]: I0218 00:57:53.048497 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-qldtf"] Feb 18 00:57:53 crc kubenswrapper[4847]: I0218 00:57:53.423484 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60c4f757-8241-4268-92af-da05a6e0217e" path="/var/lib/kubelet/pods/60c4f757-8241-4268-92af-da05a6e0217e/volumes" Feb 18 00:57:53 crc kubenswrapper[4847]: I0218 00:57:53.492315 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:57:53 crc kubenswrapper[4847]: I0218 00:57:53.492428 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:57:56 crc kubenswrapper[4847]: E0218 00:57:56.408658 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:57:58 crc kubenswrapper[4847]: I0218 00:57:58.045023 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-j4gvn"] Feb 18 00:57:58 crc kubenswrapper[4847]: I0218 00:57:58.057534 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-j4gvn"] Feb 18 00:57:59 crc kubenswrapper[4847]: I0218 00:57:59.422346 4847 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="191319b2-ff52-494a-8ba9-a7402cc0dda7" path="/var/lib/kubelet/pods/191319b2-ff52-494a-8ba9-a7402cc0dda7/volumes" Feb 18 00:58:02 crc kubenswrapper[4847]: E0218 00:58:02.407591 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:58:07 crc kubenswrapper[4847]: E0218 00:58:07.416360 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:58:14 crc kubenswrapper[4847]: I0218 00:58:14.334398 4847 generic.go:334] "Generic (PLEG): container finished" podID="ecbd2804-8c74-4962-ad9c-48f261845f8c" containerID="6761dfd4dfea1302018fa10317b3de10d93372e5f35785171df3ffd9c281011b" exitCode=0 Feb 18 00:58:14 crc kubenswrapper[4847]: I0218 00:58:14.334490 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gdvd5" event={"ID":"ecbd2804-8c74-4962-ad9c-48f261845f8c","Type":"ContainerDied","Data":"6761dfd4dfea1302018fa10317b3de10d93372e5f35785171df3ffd9c281011b"} Feb 18 00:58:15 crc kubenswrapper[4847]: E0218 00:58:15.412362 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:58:15 crc 
kubenswrapper[4847]: I0218 00:58:15.892044 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gdvd5" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.016335 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bbhc\" (UniqueName: \"kubernetes.io/projected/ecbd2804-8c74-4962-ad9c-48f261845f8c-kube-api-access-9bbhc\") pod \"ecbd2804-8c74-4962-ad9c-48f261845f8c\" (UID: \"ecbd2804-8c74-4962-ad9c-48f261845f8c\") " Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.016562 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecbd2804-8c74-4962-ad9c-48f261845f8c-inventory\") pod \"ecbd2804-8c74-4962-ad9c-48f261845f8c\" (UID: \"ecbd2804-8c74-4962-ad9c-48f261845f8c\") " Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.016624 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecbd2804-8c74-4962-ad9c-48f261845f8c-ssh-key-openstack-edpm-ipam\") pod \"ecbd2804-8c74-4962-ad9c-48f261845f8c\" (UID: \"ecbd2804-8c74-4962-ad9c-48f261845f8c\") " Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.022959 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecbd2804-8c74-4962-ad9c-48f261845f8c-kube-api-access-9bbhc" (OuterVolumeSpecName: "kube-api-access-9bbhc") pod "ecbd2804-8c74-4962-ad9c-48f261845f8c" (UID: "ecbd2804-8c74-4962-ad9c-48f261845f8c"). InnerVolumeSpecName "kube-api-access-9bbhc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.049806 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecbd2804-8c74-4962-ad9c-48f261845f8c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ecbd2804-8c74-4962-ad9c-48f261845f8c" (UID: "ecbd2804-8c74-4962-ad9c-48f261845f8c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.054907 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecbd2804-8c74-4962-ad9c-48f261845f8c-inventory" (OuterVolumeSpecName: "inventory") pod "ecbd2804-8c74-4962-ad9c-48f261845f8c" (UID: "ecbd2804-8c74-4962-ad9c-48f261845f8c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.119877 4847 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecbd2804-8c74-4962-ad9c-48f261845f8c-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.119926 4847 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ecbd2804-8c74-4962-ad9c-48f261845f8c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.119943 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bbhc\" (UniqueName: \"kubernetes.io/projected/ecbd2804-8c74-4962-ad9c-48f261845f8c-kube-api-access-9bbhc\") on node \"crc\" DevicePath \"\"" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.361464 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gdvd5" 
event={"ID":"ecbd2804-8c74-4962-ad9c-48f261845f8c","Type":"ContainerDied","Data":"deebc8eaa0164a82ae32e8cd657fb908432f9b210fd7c518acca9008ded4b69d"} Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.361517 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="deebc8eaa0164a82ae32e8cd657fb908432f9b210fd7c518acca9008ded4b69d" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.361528 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gdvd5" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.462779 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8"] Feb 18 00:58:16 crc kubenswrapper[4847]: E0218 00:58:16.463379 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecbd2804-8c74-4962-ad9c-48f261845f8c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.463400 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecbd2804-8c74-4962-ad9c-48f261845f8c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.468492 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecbd2804-8c74-4962-ad9c-48f261845f8c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.469452 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.475707 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.475821 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.475989 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xzl9l" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.476233 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.507832 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8"] Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.633217 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e23c4cee-99f1-44ba-8070-565c0a433487-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8\" (UID: \"e23c4cee-99f1-44ba-8070-565c0a433487\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.633380 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hwnf\" (UniqueName: \"kubernetes.io/projected/e23c4cee-99f1-44ba-8070-565c0a433487-kube-api-access-5hwnf\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8\" (UID: \"e23c4cee-99f1-44ba-8070-565c0a433487\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8" Feb 18 00:58:16 crc 
kubenswrapper[4847]: I0218 00:58:16.633516 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e23c4cee-99f1-44ba-8070-565c0a433487-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8\" (UID: \"e23c4cee-99f1-44ba-8070-565c0a433487\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.735286 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e23c4cee-99f1-44ba-8070-565c0a433487-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8\" (UID: \"e23c4cee-99f1-44ba-8070-565c0a433487\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.735449 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e23c4cee-99f1-44ba-8070-565c0a433487-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8\" (UID: \"e23c4cee-99f1-44ba-8070-565c0a433487\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.735515 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hwnf\" (UniqueName: \"kubernetes.io/projected/e23c4cee-99f1-44ba-8070-565c0a433487-kube-api-access-5hwnf\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8\" (UID: \"e23c4cee-99f1-44ba-8070-565c0a433487\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.745490 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e23c4cee-99f1-44ba-8070-565c0a433487-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8\" (UID: \"e23c4cee-99f1-44ba-8070-565c0a433487\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.746077 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e23c4cee-99f1-44ba-8070-565c0a433487-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8\" (UID: \"e23c4cee-99f1-44ba-8070-565c0a433487\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.770997 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hwnf\" (UniqueName: \"kubernetes.io/projected/e23c4cee-99f1-44ba-8070-565c0a433487-kube-api-access-5hwnf\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8\" (UID: \"e23c4cee-99f1-44ba-8070-565c0a433487\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8" Feb 18 00:58:16 crc kubenswrapper[4847]: I0218 00:58:16.789081 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8" Feb 18 00:58:17 crc kubenswrapper[4847]: I0218 00:58:17.444753 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8"] Feb 18 00:58:18 crc kubenswrapper[4847]: I0218 00:58:18.388659 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8" event={"ID":"e23c4cee-99f1-44ba-8070-565c0a433487","Type":"ContainerStarted","Data":"494211c766332cfc91f1b37b10f30fec910ec4012060eb0ae56c7ffca51e12b4"} Feb 18 00:58:18 crc kubenswrapper[4847]: I0218 00:58:18.389001 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8" event={"ID":"e23c4cee-99f1-44ba-8070-565c0a433487","Type":"ContainerStarted","Data":"d583dec158c8fcf23664a4dd72a783281f5f5d29451a4ca99ca990f29feb4311"} Feb 18 00:58:18 crc kubenswrapper[4847]: I0218 00:58:18.410950 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8" podStartSLOduration=1.965430617 podStartE2EDuration="2.410929757s" podCreationTimestamp="2026-02-18 00:58:16 +0000 UTC" firstStartedPulling="2026-02-18 00:58:17.449739336 +0000 UTC m=+1970.827090288" lastFinishedPulling="2026-02-18 00:58:17.895238476 +0000 UTC m=+1971.272589428" observedRunningTime="2026-02-18 00:58:18.404903788 +0000 UTC m=+1971.782254730" watchObservedRunningTime="2026-02-18 00:58:18.410929757 +0000 UTC m=+1971.788280699" Feb 18 00:58:21 crc kubenswrapper[4847]: E0218 00:58:21.407431 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" 
podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:58:22 crc kubenswrapper[4847]: I0218 00:58:22.431793 4847 generic.go:334] "Generic (PLEG): container finished" podID="e23c4cee-99f1-44ba-8070-565c0a433487" containerID="494211c766332cfc91f1b37b10f30fec910ec4012060eb0ae56c7ffca51e12b4" exitCode=0 Feb 18 00:58:22 crc kubenswrapper[4847]: I0218 00:58:22.431924 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8" event={"ID":"e23c4cee-99f1-44ba-8070-565c0a433487","Type":"ContainerDied","Data":"494211c766332cfc91f1b37b10f30fec910ec4012060eb0ae56c7ffca51e12b4"} Feb 18 00:58:23 crc kubenswrapper[4847]: I0218 00:58:23.492188 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:58:23 crc kubenswrapper[4847]: I0218 00:58:23.492291 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.063493 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.229714 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e23c4cee-99f1-44ba-8070-565c0a433487-ssh-key-openstack-edpm-ipam\") pod \"e23c4cee-99f1-44ba-8070-565c0a433487\" (UID: \"e23c4cee-99f1-44ba-8070-565c0a433487\") " Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.229811 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e23c4cee-99f1-44ba-8070-565c0a433487-inventory\") pod \"e23c4cee-99f1-44ba-8070-565c0a433487\" (UID: \"e23c4cee-99f1-44ba-8070-565c0a433487\") " Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.230167 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hwnf\" (UniqueName: \"kubernetes.io/projected/e23c4cee-99f1-44ba-8070-565c0a433487-kube-api-access-5hwnf\") pod \"e23c4cee-99f1-44ba-8070-565c0a433487\" (UID: \"e23c4cee-99f1-44ba-8070-565c0a433487\") " Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.236139 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e23c4cee-99f1-44ba-8070-565c0a433487-kube-api-access-5hwnf" (OuterVolumeSpecName: "kube-api-access-5hwnf") pod "e23c4cee-99f1-44ba-8070-565c0a433487" (UID: "e23c4cee-99f1-44ba-8070-565c0a433487"). InnerVolumeSpecName "kube-api-access-5hwnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.271784 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e23c4cee-99f1-44ba-8070-565c0a433487-inventory" (OuterVolumeSpecName: "inventory") pod "e23c4cee-99f1-44ba-8070-565c0a433487" (UID: "e23c4cee-99f1-44ba-8070-565c0a433487"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.272617 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e23c4cee-99f1-44ba-8070-565c0a433487-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e23c4cee-99f1-44ba-8070-565c0a433487" (UID: "e23c4cee-99f1-44ba-8070-565c0a433487"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.333628 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hwnf\" (UniqueName: \"kubernetes.io/projected/e23c4cee-99f1-44ba-8070-565c0a433487-kube-api-access-5hwnf\") on node \"crc\" DevicePath \"\"" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.333860 4847 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e23c4cee-99f1-44ba-8070-565c0a433487-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.333927 4847 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e23c4cee-99f1-44ba-8070-565c0a433487-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.463930 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8" event={"ID":"e23c4cee-99f1-44ba-8070-565c0a433487","Type":"ContainerDied","Data":"d583dec158c8fcf23664a4dd72a783281f5f5d29451a4ca99ca990f29feb4311"} Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.463979 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d583dec158c8fcf23664a4dd72a783281f5f5d29451a4ca99ca990f29feb4311" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 
00:58:24.464031 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.558463 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7"] Feb 18 00:58:24 crc kubenswrapper[4847]: E0218 00:58:24.559277 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e23c4cee-99f1-44ba-8070-565c0a433487" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.559292 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="e23c4cee-99f1-44ba-8070-565c0a433487" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.559495 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="e23c4cee-99f1-44ba-8070-565c0a433487" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.560342 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.567238 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.567639 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xzl9l" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.567806 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.567979 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.570773 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7"] Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.644838 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td8v7\" (UniqueName: \"kubernetes.io/projected/d45f5c66-8268-498f-8c61-4c6c33cc1c28-kube-api-access-td8v7\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7\" (UID: \"d45f5c66-8268-498f-8c61-4c6c33cc1c28\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.644946 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d45f5c66-8268-498f-8c61-4c6c33cc1c28-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7\" (UID: \"d45f5c66-8268-498f-8c61-4c6c33cc1c28\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7" Feb 18 00:58:24 crc 
kubenswrapper[4847]: I0218 00:58:24.645221 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d45f5c66-8268-498f-8c61-4c6c33cc1c28-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7\" (UID: \"d45f5c66-8268-498f-8c61-4c6c33cc1c28\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.747083 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td8v7\" (UniqueName: \"kubernetes.io/projected/d45f5c66-8268-498f-8c61-4c6c33cc1c28-kube-api-access-td8v7\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7\" (UID: \"d45f5c66-8268-498f-8c61-4c6c33cc1c28\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.747156 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d45f5c66-8268-498f-8c61-4c6c33cc1c28-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7\" (UID: \"d45f5c66-8268-498f-8c61-4c6c33cc1c28\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.747240 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d45f5c66-8268-498f-8c61-4c6c33cc1c28-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7\" (UID: \"d45f5c66-8268-498f-8c61-4c6c33cc1c28\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.754066 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/d45f5c66-8268-498f-8c61-4c6c33cc1c28-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7\" (UID: \"d45f5c66-8268-498f-8c61-4c6c33cc1c28\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.756220 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d45f5c66-8268-498f-8c61-4c6c33cc1c28-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7\" (UID: \"d45f5c66-8268-498f-8c61-4c6c33cc1c28\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.765807 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td8v7\" (UniqueName: \"kubernetes.io/projected/d45f5c66-8268-498f-8c61-4c6c33cc1c28-kube-api-access-td8v7\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7\" (UID: \"d45f5c66-8268-498f-8c61-4c6c33cc1c28\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7" Feb 18 00:58:24 crc kubenswrapper[4847]: I0218 00:58:24.893366 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7" Feb 18 00:58:25 crc kubenswrapper[4847]: I0218 00:58:25.549859 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7"] Feb 18 00:58:25 crc kubenswrapper[4847]: W0218 00:58:25.551740 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd45f5c66_8268_498f_8c61_4c6c33cc1c28.slice/crio-cb9f6c27316cbb98f7924d52e3a4e80dbc433a77dc63870c7ced3aac50e854a3 WatchSource:0}: Error finding container cb9f6c27316cbb98f7924d52e3a4e80dbc433a77dc63870c7ced3aac50e854a3: Status 404 returned error can't find the container with id cb9f6c27316cbb98f7924d52e3a4e80dbc433a77dc63870c7ced3aac50e854a3 Feb 18 00:58:26 crc kubenswrapper[4847]: I0218 00:58:26.085955 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-czdg6"] Feb 18 00:58:26 crc kubenswrapper[4847]: I0218 00:58:26.098612 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-czdg6"] Feb 18 00:58:26 crc kubenswrapper[4847]: I0218 00:58:26.490479 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7" event={"ID":"d45f5c66-8268-498f-8c61-4c6c33cc1c28","Type":"ContainerStarted","Data":"cef189f981b48ff0de4b2b6bf09ec2dd0296d3fc863e3e10b4a33013e2e1c5d5"} Feb 18 00:58:26 crc kubenswrapper[4847]: I0218 00:58:26.490545 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7" event={"ID":"d45f5c66-8268-498f-8c61-4c6c33cc1c28","Type":"ContainerStarted","Data":"cb9f6c27316cbb98f7924d52e3a4e80dbc433a77dc63870c7ced3aac50e854a3"} Feb 18 00:58:26 crc kubenswrapper[4847]: I0218 00:58:26.512058 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7" podStartSLOduration=2.120904375 podStartE2EDuration="2.512029485s" podCreationTimestamp="2026-02-18 00:58:24 +0000 UTC" firstStartedPulling="2026-02-18 00:58:25.555110669 +0000 UTC m=+1978.932461641" lastFinishedPulling="2026-02-18 00:58:25.946235769 +0000 UTC m=+1979.323586751" observedRunningTime="2026-02-18 00:58:26.511883302 +0000 UTC m=+1979.889234294" watchObservedRunningTime="2026-02-18 00:58:26.512029485 +0000 UTC m=+1979.889380467" Feb 18 00:58:27 crc kubenswrapper[4847]: I0218 00:58:27.421905 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d60eb1ff-80a3-47ca-b223-3aa7c7a310c0" path="/var/lib/kubelet/pods/d60eb1ff-80a3-47ca-b223-3aa7c7a310c0/volumes" Feb 18 00:58:27 crc kubenswrapper[4847]: E0218 00:58:27.423328 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:58:32 crc kubenswrapper[4847]: I0218 00:58:32.044147 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-8dkg7"] Feb 18 00:58:32 crc kubenswrapper[4847]: I0218 00:58:32.059744 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-8dkg7"] Feb 18 00:58:32 crc kubenswrapper[4847]: I0218 00:58:32.132536 4847 scope.go:117] "RemoveContainer" containerID="811b0c367e9c7a7af816d1f79a67443f9720d4729e096b1f1acd97e121fdeb5a" Feb 18 00:58:32 crc kubenswrapper[4847]: I0218 00:58:32.177144 4847 scope.go:117] "RemoveContainer" containerID="ac111aa6c0138e32c622c988694353ec1f13a86e2550d627ceaebb4c4ddfad61" Feb 18 00:58:32 crc kubenswrapper[4847]: I0218 00:58:32.246617 4847 scope.go:117] "RemoveContainer" 
containerID="936574ae9481f5f0ae4568202bdfe99d73e60a7ad57f5e720681c4ef93b4d915" Feb 18 00:58:32 crc kubenswrapper[4847]: I0218 00:58:32.297095 4847 scope.go:117] "RemoveContainer" containerID="d6f883501b59c3e4144f5c290125a27a0f6fbaaac7d5530377a4015b555eef77" Feb 18 00:58:32 crc kubenswrapper[4847]: I0218 00:58:32.348478 4847 scope.go:117] "RemoveContainer" containerID="3a8f6bbbf7f5aad6401cf8f265f089e07966cc21bc68f74bf3ffadef365eb04f" Feb 18 00:58:32 crc kubenswrapper[4847]: I0218 00:58:32.412397 4847 scope.go:117] "RemoveContainer" containerID="c116e5094ea3264493552ea528dae9d7e9f0ae637beb77a1de03399ef2398e62" Feb 18 00:58:32 crc kubenswrapper[4847]: I0218 00:58:32.456953 4847 scope.go:117] "RemoveContainer" containerID="1cdf146627b3a94206f70eefd5763c81eeb7990652bf09baea080a9fca8bbfc8" Feb 18 00:58:32 crc kubenswrapper[4847]: I0218 00:58:32.491741 4847 scope.go:117] "RemoveContainer" containerID="0650f42516b95dea8ce9c207fbb0b7d69ecfc556bf7d17df21d835faa5393835" Feb 18 00:58:32 crc kubenswrapper[4847]: I0218 00:58:32.516995 4847 scope.go:117] "RemoveContainer" containerID="0ac8d7dcda0f6c26811b48165e3e0ce82324aafe16f868ad396207a942d24ffe" Feb 18 00:58:32 crc kubenswrapper[4847]: I0218 00:58:32.543028 4847 scope.go:117] "RemoveContainer" containerID="c20fb1f015a90907f1e0323b53be169d1863d01e313db1e48e5cdeb650cb04fb" Feb 18 00:58:32 crc kubenswrapper[4847]: I0218 00:58:32.568146 4847 scope.go:117] "RemoveContainer" containerID="b2f49f1f7519947baee7fc615ba3fc4702cee4e29361a070f206a71ef9d82eb6" Feb 18 00:58:32 crc kubenswrapper[4847]: I0218 00:58:32.610107 4847 scope.go:117] "RemoveContainer" containerID="a25fd2e270eee2583d85f4e223f799766a2f4cf6ee32004726bba00921f310d2" Feb 18 00:58:33 crc kubenswrapper[4847]: E0218 00:58:33.406782 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:58:33 crc kubenswrapper[4847]: I0218 00:58:33.427669 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de1208a0-7171-4d36-af50-a33f03208e5d" path="/var/lib/kubelet/pods/de1208a0-7171-4d36-af50-a33f03208e5d/volumes" Feb 18 00:58:35 crc kubenswrapper[4847]: I0218 00:58:35.040840 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-g2d77"] Feb 18 00:58:35 crc kubenswrapper[4847]: I0218 00:58:35.051752 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-g2d77"] Feb 18 00:58:35 crc kubenswrapper[4847]: I0218 00:58:35.431518 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5436dc2-1c05-46b4-9b91-c70bee8c4126" path="/var/lib/kubelet/pods/c5436dc2-1c05-46b4-9b91-c70bee8c4126/volumes" Feb 18 00:58:38 crc kubenswrapper[4847]: I0218 00:58:38.059808 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-pd45p"] Feb 18 00:58:38 crc kubenswrapper[4847]: I0218 00:58:38.076289 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-pd45p"] Feb 18 00:58:39 crc kubenswrapper[4847]: I0218 00:58:39.426641 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca59c512-1360-4daf-9ee3-9c5c7cd143e1" path="/var/lib/kubelet/pods/ca59c512-1360-4daf-9ee3-9c5c7cd143e1/volumes" Feb 18 00:58:40 crc kubenswrapper[4847]: E0218 00:58:40.408545 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:58:45 crc 
kubenswrapper[4847]: E0218 00:58:45.407432 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:58:47 crc kubenswrapper[4847]: I0218 00:58:47.062101 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-55rvt"] Feb 18 00:58:47 crc kubenswrapper[4847]: I0218 00:58:47.078522 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-55rvt" Feb 18 00:58:47 crc kubenswrapper[4847]: I0218 00:58:47.088706 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-55rvt"] Feb 18 00:58:47 crc kubenswrapper[4847]: I0218 00:58:47.261245 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgk2t\" (UniqueName: \"kubernetes.io/projected/628c5d1c-326b-482e-82f8-3ffa4e27449f-kube-api-access-dgk2t\") pod \"redhat-marketplace-55rvt\" (UID: \"628c5d1c-326b-482e-82f8-3ffa4e27449f\") " pod="openshift-marketplace/redhat-marketplace-55rvt" Feb 18 00:58:47 crc kubenswrapper[4847]: I0218 00:58:47.261815 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/628c5d1c-326b-482e-82f8-3ffa4e27449f-catalog-content\") pod \"redhat-marketplace-55rvt\" (UID: \"628c5d1c-326b-482e-82f8-3ffa4e27449f\") " pod="openshift-marketplace/redhat-marketplace-55rvt" Feb 18 00:58:47 crc kubenswrapper[4847]: I0218 00:58:47.262115 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/628c5d1c-326b-482e-82f8-3ffa4e27449f-utilities\") pod \"redhat-marketplace-55rvt\" (UID: \"628c5d1c-326b-482e-82f8-3ffa4e27449f\") " pod="openshift-marketplace/redhat-marketplace-55rvt" Feb 18 00:58:47 crc kubenswrapper[4847]: I0218 00:58:47.364698 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgk2t\" (UniqueName: \"kubernetes.io/projected/628c5d1c-326b-482e-82f8-3ffa4e27449f-kube-api-access-dgk2t\") pod \"redhat-marketplace-55rvt\" (UID: \"628c5d1c-326b-482e-82f8-3ffa4e27449f\") " pod="openshift-marketplace/redhat-marketplace-55rvt" Feb 18 00:58:47 crc kubenswrapper[4847]: I0218 00:58:47.364838 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/628c5d1c-326b-482e-82f8-3ffa4e27449f-catalog-content\") pod \"redhat-marketplace-55rvt\" (UID: \"628c5d1c-326b-482e-82f8-3ffa4e27449f\") " pod="openshift-marketplace/redhat-marketplace-55rvt" Feb 18 00:58:47 crc kubenswrapper[4847]: I0218 00:58:47.364964 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/628c5d1c-326b-482e-82f8-3ffa4e27449f-utilities\") pod \"redhat-marketplace-55rvt\" (UID: \"628c5d1c-326b-482e-82f8-3ffa4e27449f\") " pod="openshift-marketplace/redhat-marketplace-55rvt" Feb 18 00:58:47 crc kubenswrapper[4847]: I0218 00:58:47.365800 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/628c5d1c-326b-482e-82f8-3ffa4e27449f-catalog-content\") pod \"redhat-marketplace-55rvt\" (UID: \"628c5d1c-326b-482e-82f8-3ffa4e27449f\") " pod="openshift-marketplace/redhat-marketplace-55rvt" Feb 18 00:58:47 crc kubenswrapper[4847]: I0218 00:58:47.365899 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/628c5d1c-326b-482e-82f8-3ffa4e27449f-utilities\") pod \"redhat-marketplace-55rvt\" (UID: \"628c5d1c-326b-482e-82f8-3ffa4e27449f\") " pod="openshift-marketplace/redhat-marketplace-55rvt" Feb 18 00:58:47 crc kubenswrapper[4847]: I0218 00:58:47.401133 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgk2t\" (UniqueName: \"kubernetes.io/projected/628c5d1c-326b-482e-82f8-3ffa4e27449f-kube-api-access-dgk2t\") pod \"redhat-marketplace-55rvt\" (UID: \"628c5d1c-326b-482e-82f8-3ffa4e27449f\") " pod="openshift-marketplace/redhat-marketplace-55rvt" Feb 18 00:58:47 crc kubenswrapper[4847]: I0218 00:58:47.418391 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-55rvt" Feb 18 00:58:47 crc kubenswrapper[4847]: I0218 00:58:47.930313 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-55rvt"] Feb 18 00:58:48 crc kubenswrapper[4847]: I0218 00:58:48.835366 4847 generic.go:334] "Generic (PLEG): container finished" podID="628c5d1c-326b-482e-82f8-3ffa4e27449f" containerID="4cd98f06f11506dd97d02157a2bf7181a6c2d55ffe588b6df21b4b9385a64ffe" exitCode=0 Feb 18 00:58:48 crc kubenswrapper[4847]: I0218 00:58:48.835493 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-55rvt" event={"ID":"628c5d1c-326b-482e-82f8-3ffa4e27449f","Type":"ContainerDied","Data":"4cd98f06f11506dd97d02157a2bf7181a6c2d55ffe588b6df21b4b9385a64ffe"} Feb 18 00:58:48 crc kubenswrapper[4847]: I0218 00:58:48.835785 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-55rvt" event={"ID":"628c5d1c-326b-482e-82f8-3ffa4e27449f","Type":"ContainerStarted","Data":"81bb5cf5c5d712310d7c27bbb70bdaec5c96a12c792417631d6671653d304028"} Feb 18 00:58:49 crc kubenswrapper[4847]: I0218 00:58:49.851378 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-55rvt" event={"ID":"628c5d1c-326b-482e-82f8-3ffa4e27449f","Type":"ContainerStarted","Data":"210ee29e710c042c4ac53e481089c073a2e90dc169aaa46d75fff11d11b4ce57"} Feb 18 00:58:51 crc kubenswrapper[4847]: I0218 00:58:51.050327 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-qxdsw"] Feb 18 00:58:51 crc kubenswrapper[4847]: I0218 00:58:51.067493 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-qxdsw"] Feb 18 00:58:51 crc kubenswrapper[4847]: I0218 00:58:51.418527 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e40815e0-c0e4-4265-94f8-c9c7b262a011" path="/var/lib/kubelet/pods/e40815e0-c0e4-4265-94f8-c9c7b262a011/volumes" Feb 18 00:58:51 crc kubenswrapper[4847]: I0218 00:58:51.874713 4847 generic.go:334] "Generic (PLEG): container finished" podID="628c5d1c-326b-482e-82f8-3ffa4e27449f" containerID="210ee29e710c042c4ac53e481089c073a2e90dc169aaa46d75fff11d11b4ce57" exitCode=0 Feb 18 00:58:51 crc kubenswrapper[4847]: I0218 00:58:51.874771 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-55rvt" event={"ID":"628c5d1c-326b-482e-82f8-3ffa4e27449f","Type":"ContainerDied","Data":"210ee29e710c042c4ac53e481089c073a2e90dc169aaa46d75fff11d11b4ce57"} Feb 18 00:58:52 crc kubenswrapper[4847]: I0218 00:58:52.886849 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-55rvt" event={"ID":"628c5d1c-326b-482e-82f8-3ffa4e27449f","Type":"ContainerStarted","Data":"b131d1e9f3236d2c145aad94cf204b14d618f3d720ef00514f7e268007290bca"} Feb 18 00:58:52 crc kubenswrapper[4847]: I0218 00:58:52.920363 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-55rvt" podStartSLOduration=2.326495323 podStartE2EDuration="5.920343225s" podCreationTimestamp="2026-02-18 00:58:47 +0000 UTC" 
firstStartedPulling="2026-02-18 00:58:48.838536255 +0000 UTC m=+2002.215887197" lastFinishedPulling="2026-02-18 00:58:52.432384147 +0000 UTC m=+2005.809735099" observedRunningTime="2026-02-18 00:58:52.911403604 +0000 UTC m=+2006.288754546" watchObservedRunningTime="2026-02-18 00:58:52.920343225 +0000 UTC m=+2006.297694167" Feb 18 00:58:53 crc kubenswrapper[4847]: E0218 00:58:53.406994 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:58:53 crc kubenswrapper[4847]: I0218 00:58:53.491712 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:58:53 crc kubenswrapper[4847]: I0218 00:58:53.491797 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:58:53 crc kubenswrapper[4847]: I0218 00:58:53.491865 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 00:58:53 crc kubenswrapper[4847]: I0218 00:58:53.492946 4847 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"23f3a796a2412e9ab1c0e2914b0f2abb3867d28ef0847371c851e8c2e11a6769"} 
pod="openshift-machine-config-operator/machine-config-daemon-xsj47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:58:53 crc kubenswrapper[4847]: I0218 00:58:53.493018 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" containerID="cri-o://23f3a796a2412e9ab1c0e2914b0f2abb3867d28ef0847371c851e8c2e11a6769" gracePeriod=600 Feb 18 00:58:53 crc kubenswrapper[4847]: I0218 00:58:53.904065 4847 generic.go:334] "Generic (PLEG): container finished" podID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerID="23f3a796a2412e9ab1c0e2914b0f2abb3867d28ef0847371c851e8c2e11a6769" exitCode=0 Feb 18 00:58:53 crc kubenswrapper[4847]: I0218 00:58:53.904233 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerDied","Data":"23f3a796a2412e9ab1c0e2914b0f2abb3867d28ef0847371c851e8c2e11a6769"} Feb 18 00:58:53 crc kubenswrapper[4847]: I0218 00:58:53.904367 4847 scope.go:117] "RemoveContainer" containerID="c97c189ff46ad9fbd37848b34bb97e0583b54ef70bddd4683460d710c1d9136d" Feb 18 00:58:54 crc kubenswrapper[4847]: I0218 00:58:54.920967 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerStarted","Data":"f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970"} Feb 18 00:58:57 crc kubenswrapper[4847]: E0218 00:58:57.413933 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:58:57 crc kubenswrapper[4847]: I0218 00:58:57.424311 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-55rvt" Feb 18 00:58:57 crc kubenswrapper[4847]: I0218 00:58:57.424409 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-55rvt" Feb 18 00:58:58 crc kubenswrapper[4847]: I0218 00:58:58.476094 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-55rvt" podUID="628c5d1c-326b-482e-82f8-3ffa4e27449f" containerName="registry-server" probeResult="failure" output=< Feb 18 00:58:58 crc kubenswrapper[4847]: timeout: failed to connect service ":50051" within 1s Feb 18 00:58:58 crc kubenswrapper[4847]: > Feb 18 00:59:05 crc kubenswrapper[4847]: I0218 00:59:05.433333 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-twxpt"] Feb 18 00:59:05 crc kubenswrapper[4847]: I0218 00:59:05.458285 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-twxpt"] Feb 18 00:59:05 crc kubenswrapper[4847]: I0218 00:59:05.458485 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-twxpt" Feb 18 00:59:05 crc kubenswrapper[4847]: I0218 00:59:05.639992 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2529e3c3-1221-4d81-959f-0aa3844dead6-utilities\") pod \"community-operators-twxpt\" (UID: \"2529e3c3-1221-4d81-959f-0aa3844dead6\") " pod="openshift-marketplace/community-operators-twxpt" Feb 18 00:59:05 crc kubenswrapper[4847]: I0218 00:59:05.640100 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2529e3c3-1221-4d81-959f-0aa3844dead6-catalog-content\") pod \"community-operators-twxpt\" (UID: \"2529e3c3-1221-4d81-959f-0aa3844dead6\") " pod="openshift-marketplace/community-operators-twxpt" Feb 18 00:59:05 crc kubenswrapper[4847]: I0218 00:59:05.640252 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gjdj\" (UniqueName: \"kubernetes.io/projected/2529e3c3-1221-4d81-959f-0aa3844dead6-kube-api-access-6gjdj\") pod \"community-operators-twxpt\" (UID: \"2529e3c3-1221-4d81-959f-0aa3844dead6\") " pod="openshift-marketplace/community-operators-twxpt" Feb 18 00:59:05 crc kubenswrapper[4847]: I0218 00:59:05.742843 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gjdj\" (UniqueName: \"kubernetes.io/projected/2529e3c3-1221-4d81-959f-0aa3844dead6-kube-api-access-6gjdj\") pod \"community-operators-twxpt\" (UID: \"2529e3c3-1221-4d81-959f-0aa3844dead6\") " pod="openshift-marketplace/community-operators-twxpt" Feb 18 00:59:05 crc kubenswrapper[4847]: I0218 00:59:05.742991 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2529e3c3-1221-4d81-959f-0aa3844dead6-utilities\") pod 
\"community-operators-twxpt\" (UID: \"2529e3c3-1221-4d81-959f-0aa3844dead6\") " pod="openshift-marketplace/community-operators-twxpt" Feb 18 00:59:05 crc kubenswrapper[4847]: I0218 00:59:05.743047 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2529e3c3-1221-4d81-959f-0aa3844dead6-catalog-content\") pod \"community-operators-twxpt\" (UID: \"2529e3c3-1221-4d81-959f-0aa3844dead6\") " pod="openshift-marketplace/community-operators-twxpt" Feb 18 00:59:05 crc kubenswrapper[4847]: I0218 00:59:05.743510 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2529e3c3-1221-4d81-959f-0aa3844dead6-utilities\") pod \"community-operators-twxpt\" (UID: \"2529e3c3-1221-4d81-959f-0aa3844dead6\") " pod="openshift-marketplace/community-operators-twxpt" Feb 18 00:59:05 crc kubenswrapper[4847]: I0218 00:59:05.743683 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2529e3c3-1221-4d81-959f-0aa3844dead6-catalog-content\") pod \"community-operators-twxpt\" (UID: \"2529e3c3-1221-4d81-959f-0aa3844dead6\") " pod="openshift-marketplace/community-operators-twxpt" Feb 18 00:59:05 crc kubenswrapper[4847]: I0218 00:59:05.775125 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gjdj\" (UniqueName: \"kubernetes.io/projected/2529e3c3-1221-4d81-959f-0aa3844dead6-kube-api-access-6gjdj\") pod \"community-operators-twxpt\" (UID: \"2529e3c3-1221-4d81-959f-0aa3844dead6\") " pod="openshift-marketplace/community-operators-twxpt" Feb 18 00:59:05 crc kubenswrapper[4847]: I0218 00:59:05.784050 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-twxpt" Feb 18 00:59:06 crc kubenswrapper[4847]: W0218 00:59:06.409057 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2529e3c3_1221_4d81_959f_0aa3844dead6.slice/crio-e7546374e7d061fe5ae248c80808db0d5753754c1f0aa67b307aaa5f575318a4 WatchSource:0}: Error finding container e7546374e7d061fe5ae248c80808db0d5753754c1f0aa67b307aaa5f575318a4: Status 404 returned error can't find the container with id e7546374e7d061fe5ae248c80808db0d5753754c1f0aa67b307aaa5f575318a4 Feb 18 00:59:06 crc kubenswrapper[4847]: I0218 00:59:06.411241 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-twxpt"] Feb 18 00:59:07 crc kubenswrapper[4847]: I0218 00:59:07.079879 4847 generic.go:334] "Generic (PLEG): container finished" podID="2529e3c3-1221-4d81-959f-0aa3844dead6" containerID="3be931ddfef8fbf59e2322456a3ce299959f36c4b0c650d13026fb114b3769ce" exitCode=0 Feb 18 00:59:07 crc kubenswrapper[4847]: I0218 00:59:07.080233 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twxpt" event={"ID":"2529e3c3-1221-4d81-959f-0aa3844dead6","Type":"ContainerDied","Data":"3be931ddfef8fbf59e2322456a3ce299959f36c4b0c650d13026fb114b3769ce"} Feb 18 00:59:07 crc kubenswrapper[4847]: I0218 00:59:07.080284 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twxpt" event={"ID":"2529e3c3-1221-4d81-959f-0aa3844dead6","Type":"ContainerStarted","Data":"e7546374e7d061fe5ae248c80808db0d5753754c1f0aa67b307aaa5f575318a4"} Feb 18 00:59:07 crc kubenswrapper[4847]: E0218 00:59:07.421698 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:59:07 crc kubenswrapper[4847]: I0218 00:59:07.495683 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-55rvt" Feb 18 00:59:07 crc kubenswrapper[4847]: I0218 00:59:07.572938 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-55rvt" Feb 18 00:59:08 crc kubenswrapper[4847]: I0218 00:59:08.103895 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twxpt" event={"ID":"2529e3c3-1221-4d81-959f-0aa3844dead6","Type":"ContainerStarted","Data":"497833777a7977cd8ef88c4667c79104e3fb9bfdb3a4d0b506b90526b7ceb280"} Feb 18 00:59:09 crc kubenswrapper[4847]: I0218 00:59:09.770330 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-55rvt"] Feb 18 00:59:09 crc kubenswrapper[4847]: I0218 00:59:09.771317 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-55rvt" podUID="628c5d1c-326b-482e-82f8-3ffa4e27449f" containerName="registry-server" containerID="cri-o://b131d1e9f3236d2c145aad94cf204b14d618f3d720ef00514f7e268007290bca" gracePeriod=2 Feb 18 00:59:10 crc kubenswrapper[4847]: I0218 00:59:10.136534 4847 generic.go:334] "Generic (PLEG): container finished" podID="2529e3c3-1221-4d81-959f-0aa3844dead6" containerID="497833777a7977cd8ef88c4667c79104e3fb9bfdb3a4d0b506b90526b7ceb280" exitCode=0 Feb 18 00:59:10 crc kubenswrapper[4847]: I0218 00:59:10.136625 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twxpt" event={"ID":"2529e3c3-1221-4d81-959f-0aa3844dead6","Type":"ContainerDied","Data":"497833777a7977cd8ef88c4667c79104e3fb9bfdb3a4d0b506b90526b7ceb280"} 
Feb 18 00:59:10 crc kubenswrapper[4847]: I0218 00:59:10.146752 4847 generic.go:334] "Generic (PLEG): container finished" podID="628c5d1c-326b-482e-82f8-3ffa4e27449f" containerID="b131d1e9f3236d2c145aad94cf204b14d618f3d720ef00514f7e268007290bca" exitCode=0 Feb 18 00:59:10 crc kubenswrapper[4847]: I0218 00:59:10.146804 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-55rvt" event={"ID":"628c5d1c-326b-482e-82f8-3ffa4e27449f","Type":"ContainerDied","Data":"b131d1e9f3236d2c145aad94cf204b14d618f3d720ef00514f7e268007290bca"} Feb 18 00:59:10 crc kubenswrapper[4847]: I0218 00:59:10.344002 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-55rvt" Feb 18 00:59:10 crc kubenswrapper[4847]: I0218 00:59:10.460336 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/628c5d1c-326b-482e-82f8-3ffa4e27449f-utilities\") pod \"628c5d1c-326b-482e-82f8-3ffa4e27449f\" (UID: \"628c5d1c-326b-482e-82f8-3ffa4e27449f\") " Feb 18 00:59:10 crc kubenswrapper[4847]: I0218 00:59:10.460426 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgk2t\" (UniqueName: \"kubernetes.io/projected/628c5d1c-326b-482e-82f8-3ffa4e27449f-kube-api-access-dgk2t\") pod \"628c5d1c-326b-482e-82f8-3ffa4e27449f\" (UID: \"628c5d1c-326b-482e-82f8-3ffa4e27449f\") " Feb 18 00:59:10 crc kubenswrapper[4847]: I0218 00:59:10.460478 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/628c5d1c-326b-482e-82f8-3ffa4e27449f-catalog-content\") pod \"628c5d1c-326b-482e-82f8-3ffa4e27449f\" (UID: \"628c5d1c-326b-482e-82f8-3ffa4e27449f\") " Feb 18 00:59:10 crc kubenswrapper[4847]: I0218 00:59:10.461244 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/628c5d1c-326b-482e-82f8-3ffa4e27449f-utilities" (OuterVolumeSpecName: "utilities") pod "628c5d1c-326b-482e-82f8-3ffa4e27449f" (UID: "628c5d1c-326b-482e-82f8-3ffa4e27449f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:59:10 crc kubenswrapper[4847]: I0218 00:59:10.470178 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/628c5d1c-326b-482e-82f8-3ffa4e27449f-kube-api-access-dgk2t" (OuterVolumeSpecName: "kube-api-access-dgk2t") pod "628c5d1c-326b-482e-82f8-3ffa4e27449f" (UID: "628c5d1c-326b-482e-82f8-3ffa4e27449f"). InnerVolumeSpecName "kube-api-access-dgk2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:59:10 crc kubenswrapper[4847]: I0218 00:59:10.489539 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/628c5d1c-326b-482e-82f8-3ffa4e27449f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "628c5d1c-326b-482e-82f8-3ffa4e27449f" (UID: "628c5d1c-326b-482e-82f8-3ffa4e27449f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:59:10 crc kubenswrapper[4847]: I0218 00:59:10.562930 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/628c5d1c-326b-482e-82f8-3ffa4e27449f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:10 crc kubenswrapper[4847]: I0218 00:59:10.562969 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgk2t\" (UniqueName: \"kubernetes.io/projected/628c5d1c-326b-482e-82f8-3ffa4e27449f-kube-api-access-dgk2t\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:10 crc kubenswrapper[4847]: I0218 00:59:10.562983 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/628c5d1c-326b-482e-82f8-3ffa4e27449f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:11 crc kubenswrapper[4847]: I0218 00:59:11.169191 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-55rvt" event={"ID":"628c5d1c-326b-482e-82f8-3ffa4e27449f","Type":"ContainerDied","Data":"81bb5cf5c5d712310d7c27bbb70bdaec5c96a12c792417631d6671653d304028"} Feb 18 00:59:11 crc kubenswrapper[4847]: I0218 00:59:11.169455 4847 scope.go:117] "RemoveContainer" containerID="b131d1e9f3236d2c145aad94cf204b14d618f3d720ef00514f7e268007290bca" Feb 18 00:59:11 crc kubenswrapper[4847]: I0218 00:59:11.169618 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-55rvt" Feb 18 00:59:11 crc kubenswrapper[4847]: I0218 00:59:11.178353 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twxpt" event={"ID":"2529e3c3-1221-4d81-959f-0aa3844dead6","Type":"ContainerStarted","Data":"ee7cb98d621eaf9e6edf76baab9166c99b98eb2ddaffeb433bad0f17205e487a"} Feb 18 00:59:11 crc kubenswrapper[4847]: I0218 00:59:11.214551 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-twxpt" podStartSLOduration=2.658602703 podStartE2EDuration="6.214527175s" podCreationTimestamp="2026-02-18 00:59:05 +0000 UTC" firstStartedPulling="2026-02-18 00:59:07.084327159 +0000 UTC m=+2020.461678141" lastFinishedPulling="2026-02-18 00:59:10.640251641 +0000 UTC m=+2024.017602613" observedRunningTime="2026-02-18 00:59:11.208142072 +0000 UTC m=+2024.585493054" watchObservedRunningTime="2026-02-18 00:59:11.214527175 +0000 UTC m=+2024.591878127" Feb 18 00:59:11 crc kubenswrapper[4847]: I0218 00:59:11.217639 4847 scope.go:117] "RemoveContainer" containerID="210ee29e710c042c4ac53e481089c073a2e90dc169aaa46d75fff11d11b4ce57" Feb 18 00:59:11 crc kubenswrapper[4847]: I0218 00:59:11.241519 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-55rvt"] Feb 18 00:59:11 crc kubenswrapper[4847]: I0218 00:59:11.251327 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-55rvt"] Feb 18 00:59:11 crc kubenswrapper[4847]: I0218 00:59:11.263827 4847 scope.go:117] "RemoveContainer" containerID="4cd98f06f11506dd97d02157a2bf7181a6c2d55ffe588b6df21b4b9385a64ffe" Feb 18 00:59:11 crc kubenswrapper[4847]: I0218 00:59:11.419330 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="628c5d1c-326b-482e-82f8-3ffa4e27449f" path="/var/lib/kubelet/pods/628c5d1c-326b-482e-82f8-3ffa4e27449f/volumes" Feb 18 
00:59:12 crc kubenswrapper[4847]: E0218 00:59:12.409446 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:59:15 crc kubenswrapper[4847]: I0218 00:59:15.786029 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-twxpt" Feb 18 00:59:15 crc kubenswrapper[4847]: I0218 00:59:15.786888 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-twxpt" Feb 18 00:59:15 crc kubenswrapper[4847]: I0218 00:59:15.868904 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-twxpt" Feb 18 00:59:15 crc kubenswrapper[4847]: I0218 00:59:15.985303 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mnlnx"] Feb 18 00:59:15 crc kubenswrapper[4847]: E0218 00:59:15.986512 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="628c5d1c-326b-482e-82f8-3ffa4e27449f" containerName="extract-utilities" Feb 18 00:59:15 crc kubenswrapper[4847]: I0218 00:59:15.986558 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="628c5d1c-326b-482e-82f8-3ffa4e27449f" containerName="extract-utilities" Feb 18 00:59:15 crc kubenswrapper[4847]: E0218 00:59:15.986689 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="628c5d1c-326b-482e-82f8-3ffa4e27449f" containerName="extract-content" Feb 18 00:59:15 crc kubenswrapper[4847]: I0218 00:59:15.986713 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="628c5d1c-326b-482e-82f8-3ffa4e27449f" containerName="extract-content" Feb 18 00:59:15 crc kubenswrapper[4847]: E0218 00:59:15.986743 4847 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="628c5d1c-326b-482e-82f8-3ffa4e27449f" containerName="registry-server" Feb 18 00:59:15 crc kubenswrapper[4847]: I0218 00:59:15.986762 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="628c5d1c-326b-482e-82f8-3ffa4e27449f" containerName="registry-server" Feb 18 00:59:15 crc kubenswrapper[4847]: I0218 00:59:15.987392 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="628c5d1c-326b-482e-82f8-3ffa4e27449f" containerName="registry-server" Feb 18 00:59:15 crc kubenswrapper[4847]: I0218 00:59:15.991320 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mnlnx" Feb 18 00:59:16 crc kubenswrapper[4847]: I0218 00:59:16.002092 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mnlnx"] Feb 18 00:59:16 crc kubenswrapper[4847]: I0218 00:59:16.095730 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24d4c8b9-b48d-425d-95d4-502576fba56a-utilities\") pod \"certified-operators-mnlnx\" (UID: \"24d4c8b9-b48d-425d-95d4-502576fba56a\") " pod="openshift-marketplace/certified-operators-mnlnx" Feb 18 00:59:16 crc kubenswrapper[4847]: I0218 00:59:16.095881 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4l2d\" (UniqueName: \"kubernetes.io/projected/24d4c8b9-b48d-425d-95d4-502576fba56a-kube-api-access-q4l2d\") pod \"certified-operators-mnlnx\" (UID: \"24d4c8b9-b48d-425d-95d4-502576fba56a\") " pod="openshift-marketplace/certified-operators-mnlnx" Feb 18 00:59:16 crc kubenswrapper[4847]: I0218 00:59:16.096013 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/24d4c8b9-b48d-425d-95d4-502576fba56a-catalog-content\") pod \"certified-operators-mnlnx\" (UID: \"24d4c8b9-b48d-425d-95d4-502576fba56a\") " pod="openshift-marketplace/certified-operators-mnlnx" Feb 18 00:59:16 crc kubenswrapper[4847]: I0218 00:59:16.198230 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24d4c8b9-b48d-425d-95d4-502576fba56a-utilities\") pod \"certified-operators-mnlnx\" (UID: \"24d4c8b9-b48d-425d-95d4-502576fba56a\") " pod="openshift-marketplace/certified-operators-mnlnx" Feb 18 00:59:16 crc kubenswrapper[4847]: I0218 00:59:16.198526 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4l2d\" (UniqueName: \"kubernetes.io/projected/24d4c8b9-b48d-425d-95d4-502576fba56a-kube-api-access-q4l2d\") pod \"certified-operators-mnlnx\" (UID: \"24d4c8b9-b48d-425d-95d4-502576fba56a\") " pod="openshift-marketplace/certified-operators-mnlnx" Feb 18 00:59:16 crc kubenswrapper[4847]: I0218 00:59:16.198583 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24d4c8b9-b48d-425d-95d4-502576fba56a-catalog-content\") pod \"certified-operators-mnlnx\" (UID: \"24d4c8b9-b48d-425d-95d4-502576fba56a\") " pod="openshift-marketplace/certified-operators-mnlnx" Feb 18 00:59:16 crc kubenswrapper[4847]: I0218 00:59:16.198956 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24d4c8b9-b48d-425d-95d4-502576fba56a-utilities\") pod \"certified-operators-mnlnx\" (UID: \"24d4c8b9-b48d-425d-95d4-502576fba56a\") " pod="openshift-marketplace/certified-operators-mnlnx" Feb 18 00:59:16 crc kubenswrapper[4847]: I0218 00:59:16.199003 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/24d4c8b9-b48d-425d-95d4-502576fba56a-catalog-content\") pod \"certified-operators-mnlnx\" (UID: \"24d4c8b9-b48d-425d-95d4-502576fba56a\") " pod="openshift-marketplace/certified-operators-mnlnx" Feb 18 00:59:16 crc kubenswrapper[4847]: I0218 00:59:16.225931 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4l2d\" (UniqueName: \"kubernetes.io/projected/24d4c8b9-b48d-425d-95d4-502576fba56a-kube-api-access-q4l2d\") pod \"certified-operators-mnlnx\" (UID: \"24d4c8b9-b48d-425d-95d4-502576fba56a\") " pod="openshift-marketplace/certified-operators-mnlnx" Feb 18 00:59:16 crc kubenswrapper[4847]: I0218 00:59:16.319367 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mnlnx" Feb 18 00:59:16 crc kubenswrapper[4847]: I0218 00:59:16.327471 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-twxpt" Feb 18 00:59:16 crc kubenswrapper[4847]: I0218 00:59:16.841795 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mnlnx"] Feb 18 00:59:16 crc kubenswrapper[4847]: I0218 00:59:16.978967 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5tgd9"] Feb 18 00:59:16 crc kubenswrapper[4847]: I0218 00:59:16.985749 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5tgd9" Feb 18 00:59:16 crc kubenswrapper[4847]: I0218 00:59:16.988917 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5tgd9"] Feb 18 00:59:17 crc kubenswrapper[4847]: I0218 00:59:17.137137 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec9c4d77-538d-4f0a-ad1c-c725e2f66209-catalog-content\") pod \"redhat-operators-5tgd9\" (UID: \"ec9c4d77-538d-4f0a-ad1c-c725e2f66209\") " pod="openshift-marketplace/redhat-operators-5tgd9" Feb 18 00:59:17 crc kubenswrapper[4847]: I0218 00:59:17.137508 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec9c4d77-538d-4f0a-ad1c-c725e2f66209-utilities\") pod \"redhat-operators-5tgd9\" (UID: \"ec9c4d77-538d-4f0a-ad1c-c725e2f66209\") " pod="openshift-marketplace/redhat-operators-5tgd9" Feb 18 00:59:17 crc kubenswrapper[4847]: I0218 00:59:17.137804 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4zcg\" (UniqueName: \"kubernetes.io/projected/ec9c4d77-538d-4f0a-ad1c-c725e2f66209-kube-api-access-x4zcg\") pod \"redhat-operators-5tgd9\" (UID: \"ec9c4d77-538d-4f0a-ad1c-c725e2f66209\") " pod="openshift-marketplace/redhat-operators-5tgd9" Feb 18 00:59:17 crc kubenswrapper[4847]: I0218 00:59:17.240580 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec9c4d77-538d-4f0a-ad1c-c725e2f66209-catalog-content\") pod \"redhat-operators-5tgd9\" (UID: \"ec9c4d77-538d-4f0a-ad1c-c725e2f66209\") " pod="openshift-marketplace/redhat-operators-5tgd9" Feb 18 00:59:17 crc kubenswrapper[4847]: I0218 00:59:17.240663 4847 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec9c4d77-538d-4f0a-ad1c-c725e2f66209-utilities\") pod \"redhat-operators-5tgd9\" (UID: \"ec9c4d77-538d-4f0a-ad1c-c725e2f66209\") " pod="openshift-marketplace/redhat-operators-5tgd9" Feb 18 00:59:17 crc kubenswrapper[4847]: I0218 00:59:17.240728 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4zcg\" (UniqueName: \"kubernetes.io/projected/ec9c4d77-538d-4f0a-ad1c-c725e2f66209-kube-api-access-x4zcg\") pod \"redhat-operators-5tgd9\" (UID: \"ec9c4d77-538d-4f0a-ad1c-c725e2f66209\") " pod="openshift-marketplace/redhat-operators-5tgd9" Feb 18 00:59:17 crc kubenswrapper[4847]: I0218 00:59:17.241250 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec9c4d77-538d-4f0a-ad1c-c725e2f66209-catalog-content\") pod \"redhat-operators-5tgd9\" (UID: \"ec9c4d77-538d-4f0a-ad1c-c725e2f66209\") " pod="openshift-marketplace/redhat-operators-5tgd9" Feb 18 00:59:17 crc kubenswrapper[4847]: I0218 00:59:17.241260 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec9c4d77-538d-4f0a-ad1c-c725e2f66209-utilities\") pod \"redhat-operators-5tgd9\" (UID: \"ec9c4d77-538d-4f0a-ad1c-c725e2f66209\") " pod="openshift-marketplace/redhat-operators-5tgd9" Feb 18 00:59:17 crc kubenswrapper[4847]: I0218 00:59:17.260416 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4zcg\" (UniqueName: \"kubernetes.io/projected/ec9c4d77-538d-4f0a-ad1c-c725e2f66209-kube-api-access-x4zcg\") pod \"redhat-operators-5tgd9\" (UID: \"ec9c4d77-538d-4f0a-ad1c-c725e2f66209\") " pod="openshift-marketplace/redhat-operators-5tgd9" Feb 18 00:59:17 crc kubenswrapper[4847]: I0218 00:59:17.262954 4847 generic.go:334] "Generic (PLEG): container finished" podID="24d4c8b9-b48d-425d-95d4-502576fba56a" 
containerID="40293a03cd04c62ba4b3f42d419007b07445f163dce6da8a39f6d74414d9bb22" exitCode=0 Feb 18 00:59:17 crc kubenswrapper[4847]: I0218 00:59:17.262994 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlnx" event={"ID":"24d4c8b9-b48d-425d-95d4-502576fba56a","Type":"ContainerDied","Data":"40293a03cd04c62ba4b3f42d419007b07445f163dce6da8a39f6d74414d9bb22"} Feb 18 00:59:17 crc kubenswrapper[4847]: I0218 00:59:17.263053 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlnx" event={"ID":"24d4c8b9-b48d-425d-95d4-502576fba56a","Type":"ContainerStarted","Data":"40da7f7599fae13dd7ca6b68e57019ff705f5a9bfd94328a8829109dbfae1baa"} Feb 18 00:59:17 crc kubenswrapper[4847]: I0218 00:59:17.322813 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5tgd9" Feb 18 00:59:17 crc kubenswrapper[4847]: I0218 00:59:17.808980 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5tgd9"] Feb 18 00:59:17 crc kubenswrapper[4847]: W0218 00:59:17.810742 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec9c4d77_538d_4f0a_ad1c_c725e2f66209.slice/crio-5c85c3df5a46f47208e5537e7b6bb3b9edc3a4fd23ae72b3b93d47b4298886de WatchSource:0}: Error finding container 5c85c3df5a46f47208e5537e7b6bb3b9edc3a4fd23ae72b3b93d47b4298886de: Status 404 returned error can't find the container with id 5c85c3df5a46f47208e5537e7b6bb3b9edc3a4fd23ae72b3b93d47b4298886de Feb 18 00:59:18 crc kubenswrapper[4847]: I0218 00:59:18.277653 4847 generic.go:334] "Generic (PLEG): container finished" podID="ec9c4d77-538d-4f0a-ad1c-c725e2f66209" containerID="1bbb1edde4bae1bf12f4d28a39a2098b84438072f5870a7750644f537024e068" exitCode=0 Feb 18 00:59:18 crc kubenswrapper[4847]: I0218 00:59:18.277704 4847 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-operators-5tgd9" event={"ID":"ec9c4d77-538d-4f0a-ad1c-c725e2f66209","Type":"ContainerDied","Data":"1bbb1edde4bae1bf12f4d28a39a2098b84438072f5870a7750644f537024e068"} Feb 18 00:59:18 crc kubenswrapper[4847]: I0218 00:59:18.278131 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tgd9" event={"ID":"ec9c4d77-538d-4f0a-ad1c-c725e2f66209","Type":"ContainerStarted","Data":"5c85c3df5a46f47208e5537e7b6bb3b9edc3a4fd23ae72b3b93d47b4298886de"} Feb 18 00:59:18 crc kubenswrapper[4847]: I0218 00:59:18.284232 4847 generic.go:334] "Generic (PLEG): container finished" podID="d45f5c66-8268-498f-8c61-4c6c33cc1c28" containerID="cef189f981b48ff0de4b2b6bf09ec2dd0296d3fc863e3e10b4a33013e2e1c5d5" exitCode=0 Feb 18 00:59:18 crc kubenswrapper[4847]: I0218 00:59:18.284276 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7" event={"ID":"d45f5c66-8268-498f-8c61-4c6c33cc1c28","Type":"ContainerDied","Data":"cef189f981b48ff0de4b2b6bf09ec2dd0296d3fc863e3e10b4a33013e2e1c5d5"} Feb 18 00:59:18 crc kubenswrapper[4847]: I0218 00:59:18.367171 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-twxpt"] Feb 18 00:59:18 crc kubenswrapper[4847]: I0218 00:59:18.367651 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-twxpt" podUID="2529e3c3-1221-4d81-959f-0aa3844dead6" containerName="registry-server" containerID="cri-o://ee7cb98d621eaf9e6edf76baab9166c99b98eb2ddaffeb433bad0f17205e487a" gracePeriod=2 Feb 18 00:59:18 crc kubenswrapper[4847]: I0218 00:59:18.905025 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-twxpt" Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.006349 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2529e3c3-1221-4d81-959f-0aa3844dead6-catalog-content\") pod \"2529e3c3-1221-4d81-959f-0aa3844dead6\" (UID: \"2529e3c3-1221-4d81-959f-0aa3844dead6\") " Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.006463 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gjdj\" (UniqueName: \"kubernetes.io/projected/2529e3c3-1221-4d81-959f-0aa3844dead6-kube-api-access-6gjdj\") pod \"2529e3c3-1221-4d81-959f-0aa3844dead6\" (UID: \"2529e3c3-1221-4d81-959f-0aa3844dead6\") " Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.006729 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2529e3c3-1221-4d81-959f-0aa3844dead6-utilities\") pod \"2529e3c3-1221-4d81-959f-0aa3844dead6\" (UID: \"2529e3c3-1221-4d81-959f-0aa3844dead6\") " Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.008006 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2529e3c3-1221-4d81-959f-0aa3844dead6-utilities" (OuterVolumeSpecName: "utilities") pod "2529e3c3-1221-4d81-959f-0aa3844dead6" (UID: "2529e3c3-1221-4d81-959f-0aa3844dead6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.013426 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2529e3c3-1221-4d81-959f-0aa3844dead6-kube-api-access-6gjdj" (OuterVolumeSpecName: "kube-api-access-6gjdj") pod "2529e3c3-1221-4d81-959f-0aa3844dead6" (UID: "2529e3c3-1221-4d81-959f-0aa3844dead6"). InnerVolumeSpecName "kube-api-access-6gjdj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.084434 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2529e3c3-1221-4d81-959f-0aa3844dead6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2529e3c3-1221-4d81-959f-0aa3844dead6" (UID: "2529e3c3-1221-4d81-959f-0aa3844dead6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.109977 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2529e3c3-1221-4d81-959f-0aa3844dead6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.110023 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gjdj\" (UniqueName: \"kubernetes.io/projected/2529e3c3-1221-4d81-959f-0aa3844dead6-kube-api-access-6gjdj\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.110044 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2529e3c3-1221-4d81-959f-0aa3844dead6-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.311525 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tgd9" event={"ID":"ec9c4d77-538d-4f0a-ad1c-c725e2f66209","Type":"ContainerStarted","Data":"44ab6273b221da29a1ccf8e96df7da1a2ba8485dafba545f9813c89c50e33e8e"} Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.314878 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlnx" event={"ID":"24d4c8b9-b48d-425d-95d4-502576fba56a","Type":"ContainerStarted","Data":"177159c2cb53e0d7f43a5ee0d38149024c6a75587b53e4aa068a71ffc34f9c98"} Feb 18 00:59:19 crc kubenswrapper[4847]: 
I0218 00:59:19.318450 4847 generic.go:334] "Generic (PLEG): container finished" podID="2529e3c3-1221-4d81-959f-0aa3844dead6" containerID="ee7cb98d621eaf9e6edf76baab9166c99b98eb2ddaffeb433bad0f17205e487a" exitCode=0 Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.318536 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twxpt" event={"ID":"2529e3c3-1221-4d81-959f-0aa3844dead6","Type":"ContainerDied","Data":"ee7cb98d621eaf9e6edf76baab9166c99b98eb2ddaffeb433bad0f17205e487a"} Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.318612 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-twxpt" event={"ID":"2529e3c3-1221-4d81-959f-0aa3844dead6","Type":"ContainerDied","Data":"e7546374e7d061fe5ae248c80808db0d5753754c1f0aa67b307aaa5f575318a4"} Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.318560 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-twxpt" Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.318641 4847 scope.go:117] "RemoveContainer" containerID="ee7cb98d621eaf9e6edf76baab9166c99b98eb2ddaffeb433bad0f17205e487a" Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.364254 4847 scope.go:117] "RemoveContainer" containerID="497833777a7977cd8ef88c4667c79104e3fb9bfdb3a4d0b506b90526b7ceb280" Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.401319 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-twxpt"] Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.402048 4847 scope.go:117] "RemoveContainer" containerID="3be931ddfef8fbf59e2322456a3ce299959f36c4b0c650d13026fb114b3769ce" Feb 18 00:59:19 crc kubenswrapper[4847]: E0218 00:59:19.405892 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.421514 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-twxpt"] Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.449415 4847 scope.go:117] "RemoveContainer" containerID="ee7cb98d621eaf9e6edf76baab9166c99b98eb2ddaffeb433bad0f17205e487a" Feb 18 00:59:19 crc kubenswrapper[4847]: E0218 00:59:19.451009 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee7cb98d621eaf9e6edf76baab9166c99b98eb2ddaffeb433bad0f17205e487a\": container with ID starting with ee7cb98d621eaf9e6edf76baab9166c99b98eb2ddaffeb433bad0f17205e487a not found: ID does not exist" containerID="ee7cb98d621eaf9e6edf76baab9166c99b98eb2ddaffeb433bad0f17205e487a" Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.451053 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee7cb98d621eaf9e6edf76baab9166c99b98eb2ddaffeb433bad0f17205e487a"} err="failed to get container status \"ee7cb98d621eaf9e6edf76baab9166c99b98eb2ddaffeb433bad0f17205e487a\": rpc error: code = NotFound desc = could not find container \"ee7cb98d621eaf9e6edf76baab9166c99b98eb2ddaffeb433bad0f17205e487a\": container with ID starting with ee7cb98d621eaf9e6edf76baab9166c99b98eb2ddaffeb433bad0f17205e487a not found: ID does not exist" Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.451077 4847 scope.go:117] "RemoveContainer" containerID="497833777a7977cd8ef88c4667c79104e3fb9bfdb3a4d0b506b90526b7ceb280" Feb 18 00:59:19 crc kubenswrapper[4847]: E0218 00:59:19.453510 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"497833777a7977cd8ef88c4667c79104e3fb9bfdb3a4d0b506b90526b7ceb280\": container with ID starting with 497833777a7977cd8ef88c4667c79104e3fb9bfdb3a4d0b506b90526b7ceb280 not found: ID does not exist" containerID="497833777a7977cd8ef88c4667c79104e3fb9bfdb3a4d0b506b90526b7ceb280" Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.453535 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"497833777a7977cd8ef88c4667c79104e3fb9bfdb3a4d0b506b90526b7ceb280"} err="failed to get container status \"497833777a7977cd8ef88c4667c79104e3fb9bfdb3a4d0b506b90526b7ceb280\": rpc error: code = NotFound desc = could not find container \"497833777a7977cd8ef88c4667c79104e3fb9bfdb3a4d0b506b90526b7ceb280\": container with ID starting with 497833777a7977cd8ef88c4667c79104e3fb9bfdb3a4d0b506b90526b7ceb280 not found: ID does not exist" Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.453556 4847 scope.go:117] "RemoveContainer" containerID="3be931ddfef8fbf59e2322456a3ce299959f36c4b0c650d13026fb114b3769ce" Feb 18 00:59:19 crc kubenswrapper[4847]: E0218 00:59:19.455726 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3be931ddfef8fbf59e2322456a3ce299959f36c4b0c650d13026fb114b3769ce\": container with ID starting with 3be931ddfef8fbf59e2322456a3ce299959f36c4b0c650d13026fb114b3769ce not found: ID does not exist" containerID="3be931ddfef8fbf59e2322456a3ce299959f36c4b0c650d13026fb114b3769ce" Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.455778 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3be931ddfef8fbf59e2322456a3ce299959f36c4b0c650d13026fb114b3769ce"} err="failed to get container status \"3be931ddfef8fbf59e2322456a3ce299959f36c4b0c650d13026fb114b3769ce\": rpc error: code = NotFound desc = could not find container \"3be931ddfef8fbf59e2322456a3ce299959f36c4b0c650d13026fb114b3769ce\": container with ID 
starting with 3be931ddfef8fbf59e2322456a3ce299959f36c4b0c650d13026fb114b3769ce not found: ID does not exist" Feb 18 00:59:19 crc kubenswrapper[4847]: I0218 00:59:19.925703 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.031365 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d45f5c66-8268-498f-8c61-4c6c33cc1c28-inventory\") pod \"d45f5c66-8268-498f-8c61-4c6c33cc1c28\" (UID: \"d45f5c66-8268-498f-8c61-4c6c33cc1c28\") " Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.031746 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td8v7\" (UniqueName: \"kubernetes.io/projected/d45f5c66-8268-498f-8c61-4c6c33cc1c28-kube-api-access-td8v7\") pod \"d45f5c66-8268-498f-8c61-4c6c33cc1c28\" (UID: \"d45f5c66-8268-498f-8c61-4c6c33cc1c28\") " Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.031826 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d45f5c66-8268-498f-8c61-4c6c33cc1c28-ssh-key-openstack-edpm-ipam\") pod \"d45f5c66-8268-498f-8c61-4c6c33cc1c28\" (UID: \"d45f5c66-8268-498f-8c61-4c6c33cc1c28\") " Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.038588 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45f5c66-8268-498f-8c61-4c6c33cc1c28-kube-api-access-td8v7" (OuterVolumeSpecName: "kube-api-access-td8v7") pod "d45f5c66-8268-498f-8c61-4c6c33cc1c28" (UID: "d45f5c66-8268-498f-8c61-4c6c33cc1c28"). InnerVolumeSpecName "kube-api-access-td8v7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.059299 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45f5c66-8268-498f-8c61-4c6c33cc1c28-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d45f5c66-8268-498f-8c61-4c6c33cc1c28" (UID: "d45f5c66-8268-498f-8c61-4c6c33cc1c28"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.083220 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45f5c66-8268-498f-8c61-4c6c33cc1c28-inventory" (OuterVolumeSpecName: "inventory") pod "d45f5c66-8268-498f-8c61-4c6c33cc1c28" (UID: "d45f5c66-8268-498f-8c61-4c6c33cc1c28"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.134989 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td8v7\" (UniqueName: \"kubernetes.io/projected/d45f5c66-8268-498f-8c61-4c6c33cc1c28-kube-api-access-td8v7\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.135039 4847 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d45f5c66-8268-498f-8c61-4c6c33cc1c28-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.135059 4847 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d45f5c66-8268-498f-8c61-4c6c33cc1c28-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.334377 4847 generic.go:334] "Generic (PLEG): container finished" podID="24d4c8b9-b48d-425d-95d4-502576fba56a" 
containerID="177159c2cb53e0d7f43a5ee0d38149024c6a75587b53e4aa068a71ffc34f9c98" exitCode=0 Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.334536 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlnx" event={"ID":"24d4c8b9-b48d-425d-95d4-502576fba56a","Type":"ContainerDied","Data":"177159c2cb53e0d7f43a5ee0d38149024c6a75587b53e4aa068a71ffc34f9c98"} Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.339547 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.339643 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7" event={"ID":"d45f5c66-8268-498f-8c61-4c6c33cc1c28","Type":"ContainerDied","Data":"cb9f6c27316cbb98f7924d52e3a4e80dbc433a77dc63870c7ced3aac50e854a3"} Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.339827 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb9f6c27316cbb98f7924d52e3a4e80dbc433a77dc63870c7ced3aac50e854a3" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.441322 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jtqrj"] Feb 18 00:59:20 crc kubenswrapper[4847]: E0218 00:59:20.441876 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2529e3c3-1221-4d81-959f-0aa3844dead6" containerName="extract-content" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.441892 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="2529e3c3-1221-4d81-959f-0aa3844dead6" containerName="extract-content" Feb 18 00:59:20 crc kubenswrapper[4847]: E0218 00:59:20.441919 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d45f5c66-8268-498f-8c61-4c6c33cc1c28" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 
00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.441928 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="d45f5c66-8268-498f-8c61-4c6c33cc1c28" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 00:59:20 crc kubenswrapper[4847]: E0218 00:59:20.441949 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2529e3c3-1221-4d81-959f-0aa3844dead6" containerName="registry-server" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.441954 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="2529e3c3-1221-4d81-959f-0aa3844dead6" containerName="registry-server" Feb 18 00:59:20 crc kubenswrapper[4847]: E0218 00:59:20.441966 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2529e3c3-1221-4d81-959f-0aa3844dead6" containerName="extract-utilities" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.441971 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="2529e3c3-1221-4d81-959f-0aa3844dead6" containerName="extract-utilities" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.442164 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="d45f5c66-8268-498f-8c61-4c6c33cc1c28" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.442181 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="2529e3c3-1221-4d81-959f-0aa3844dead6" containerName="registry-server" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.442987 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jtqrj" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.446907 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.447775 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.447790 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.449050 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xzl9l" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.453249 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jtqrj"] Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.546344 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c2669b9-bb6f-484b-9a1b-70c6903244c5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jtqrj\" (UID: \"6c2669b9-bb6f-484b-9a1b-70c6903244c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-jtqrj" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.547583 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9qjg\" (UniqueName: \"kubernetes.io/projected/6c2669b9-bb6f-484b-9a1b-70c6903244c5-kube-api-access-n9qjg\") pod \"ssh-known-hosts-edpm-deployment-jtqrj\" (UID: \"6c2669b9-bb6f-484b-9a1b-70c6903244c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-jtqrj" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.547764 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6c2669b9-bb6f-484b-9a1b-70c6903244c5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jtqrj\" (UID: \"6c2669b9-bb6f-484b-9a1b-70c6903244c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-jtqrj" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.650351 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9qjg\" (UniqueName: \"kubernetes.io/projected/6c2669b9-bb6f-484b-9a1b-70c6903244c5-kube-api-access-n9qjg\") pod \"ssh-known-hosts-edpm-deployment-jtqrj\" (UID: \"6c2669b9-bb6f-484b-9a1b-70c6903244c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-jtqrj" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.650862 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6c2669b9-bb6f-484b-9a1b-70c6903244c5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jtqrj\" (UID: \"6c2669b9-bb6f-484b-9a1b-70c6903244c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-jtqrj" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.650996 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c2669b9-bb6f-484b-9a1b-70c6903244c5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jtqrj\" (UID: \"6c2669b9-bb6f-484b-9a1b-70c6903244c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-jtqrj" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.657292 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c2669b9-bb6f-484b-9a1b-70c6903244c5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jtqrj\" (UID: \"6c2669b9-bb6f-484b-9a1b-70c6903244c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-jtqrj" Feb 
18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.660187 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6c2669b9-bb6f-484b-9a1b-70c6903244c5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jtqrj\" (UID: \"6c2669b9-bb6f-484b-9a1b-70c6903244c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-jtqrj" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.669776 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9qjg\" (UniqueName: \"kubernetes.io/projected/6c2669b9-bb6f-484b-9a1b-70c6903244c5-kube-api-access-n9qjg\") pod \"ssh-known-hosts-edpm-deployment-jtqrj\" (UID: \"6c2669b9-bb6f-484b-9a1b-70c6903244c5\") " pod="openstack/ssh-known-hosts-edpm-deployment-jtqrj" Feb 18 00:59:20 crc kubenswrapper[4847]: I0218 00:59:20.761586 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jtqrj" Feb 18 00:59:21 crc kubenswrapper[4847]: I0218 00:59:21.254996 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jtqrj"] Feb 18 00:59:21 crc kubenswrapper[4847]: W0218 00:59:21.261450 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c2669b9_bb6f_484b_9a1b_70c6903244c5.slice/crio-f229541e3a528e2b40e6f596ad5c1a2e398aa3cf7d4ea4809573ce3cd3c5620a WatchSource:0}: Error finding container f229541e3a528e2b40e6f596ad5c1a2e398aa3cf7d4ea4809573ce3cd3c5620a: Status 404 returned error can't find the container with id f229541e3a528e2b40e6f596ad5c1a2e398aa3cf7d4ea4809573ce3cd3c5620a Feb 18 00:59:21 crc kubenswrapper[4847]: I0218 00:59:21.357933 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jtqrj" 
event={"ID":"6c2669b9-bb6f-484b-9a1b-70c6903244c5","Type":"ContainerStarted","Data":"f229541e3a528e2b40e6f596ad5c1a2e398aa3cf7d4ea4809573ce3cd3c5620a"} Feb 18 00:59:21 crc kubenswrapper[4847]: I0218 00:59:21.422017 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2529e3c3-1221-4d81-959f-0aa3844dead6" path="/var/lib/kubelet/pods/2529e3c3-1221-4d81-959f-0aa3844dead6/volumes" Feb 18 00:59:22 crc kubenswrapper[4847]: I0218 00:59:22.374495 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jtqrj" event={"ID":"6c2669b9-bb6f-484b-9a1b-70c6903244c5","Type":"ContainerStarted","Data":"3f4574977981dfed511b6a1a04742b5f7c0d011ea264e70c32cfaf9f247e7c48"} Feb 18 00:59:22 crc kubenswrapper[4847]: I0218 00:59:22.380078 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlnx" event={"ID":"24d4c8b9-b48d-425d-95d4-502576fba56a","Type":"ContainerStarted","Data":"bb447423fa88e5a712ee8e9871dee09f527c4ade05475070d5a6a3f7538f4b55"} Feb 18 00:59:22 crc kubenswrapper[4847]: I0218 00:59:22.396906 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-jtqrj" podStartSLOduration=1.7151343639999999 podStartE2EDuration="2.396885634s" podCreationTimestamp="2026-02-18 00:59:20 +0000 UTC" firstStartedPulling="2026-02-18 00:59:21.269229606 +0000 UTC m=+2034.646580578" lastFinishedPulling="2026-02-18 00:59:21.950980896 +0000 UTC m=+2035.328331848" observedRunningTime="2026-02-18 00:59:22.395220974 +0000 UTC m=+2035.772571936" watchObservedRunningTime="2026-02-18 00:59:22.396885634 +0000 UTC m=+2035.774236576" Feb 18 00:59:22 crc kubenswrapper[4847]: I0218 00:59:22.420325 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mnlnx" podStartSLOduration=3.505389638 podStartE2EDuration="7.420302215s" podCreationTimestamp="2026-02-18 00:59:15 +0000 
UTC" firstStartedPulling="2026-02-18 00:59:17.264987678 +0000 UTC m=+2030.642338620" lastFinishedPulling="2026-02-18 00:59:21.179900245 +0000 UTC m=+2034.557251197" observedRunningTime="2026-02-18 00:59:22.414363453 +0000 UTC m=+2035.791714395" watchObservedRunningTime="2026-02-18 00:59:22.420302215 +0000 UTC m=+2035.797653157" Feb 18 00:59:25 crc kubenswrapper[4847]: I0218 00:59:25.421956 4847 generic.go:334] "Generic (PLEG): container finished" podID="ec9c4d77-538d-4f0a-ad1c-c725e2f66209" containerID="44ab6273b221da29a1ccf8e96df7da1a2ba8485dafba545f9813c89c50e33e8e" exitCode=0 Feb 18 00:59:25 crc kubenswrapper[4847]: I0218 00:59:25.422338 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tgd9" event={"ID":"ec9c4d77-538d-4f0a-ad1c-c725e2f66209","Type":"ContainerDied","Data":"44ab6273b221da29a1ccf8e96df7da1a2ba8485dafba545f9813c89c50e33e8e"} Feb 18 00:59:26 crc kubenswrapper[4847]: I0218 00:59:26.320004 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mnlnx" Feb 18 00:59:26 crc kubenswrapper[4847]: I0218 00:59:26.320559 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mnlnx" Feb 18 00:59:26 crc kubenswrapper[4847]: I0218 00:59:26.389064 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mnlnx" Feb 18 00:59:26 crc kubenswrapper[4847]: I0218 00:59:26.507477 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mnlnx" Feb 18 00:59:27 crc kubenswrapper[4847]: E0218 00:59:27.418517 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:59:27 crc kubenswrapper[4847]: I0218 00:59:27.452967 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tgd9" event={"ID":"ec9c4d77-538d-4f0a-ad1c-c725e2f66209","Type":"ContainerStarted","Data":"f5f66ded2dbd46e223d0d39dd8b53df65b9fa0866ac0934897d0f5d40b521052"} Feb 18 00:59:27 crc kubenswrapper[4847]: I0218 00:59:27.496430 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5tgd9" podStartSLOduration=3.538579124 podStartE2EDuration="11.496397234s" podCreationTimestamp="2026-02-18 00:59:16 +0000 UTC" firstStartedPulling="2026-02-18 00:59:18.279651929 +0000 UTC m=+2031.657002881" lastFinishedPulling="2026-02-18 00:59:26.237470039 +0000 UTC m=+2039.614820991" observedRunningTime="2026-02-18 00:59:27.485961884 +0000 UTC m=+2040.863312826" watchObservedRunningTime="2026-02-18 00:59:27.496397234 +0000 UTC m=+2040.873748216" Feb 18 00:59:28 crc kubenswrapper[4847]: I0218 00:59:28.773206 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mnlnx"] Feb 18 00:59:28 crc kubenswrapper[4847]: I0218 00:59:28.774157 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mnlnx" podUID="24d4c8b9-b48d-425d-95d4-502576fba56a" containerName="registry-server" containerID="cri-o://bb447423fa88e5a712ee8e9871dee09f527c4ade05475070d5a6a3f7538f4b55" gracePeriod=2 Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.271289 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mnlnx" Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.369091 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4l2d\" (UniqueName: \"kubernetes.io/projected/24d4c8b9-b48d-425d-95d4-502576fba56a-kube-api-access-q4l2d\") pod \"24d4c8b9-b48d-425d-95d4-502576fba56a\" (UID: \"24d4c8b9-b48d-425d-95d4-502576fba56a\") " Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.369482 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24d4c8b9-b48d-425d-95d4-502576fba56a-catalog-content\") pod \"24d4c8b9-b48d-425d-95d4-502576fba56a\" (UID: \"24d4c8b9-b48d-425d-95d4-502576fba56a\") " Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.369568 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24d4c8b9-b48d-425d-95d4-502576fba56a-utilities\") pod \"24d4c8b9-b48d-425d-95d4-502576fba56a\" (UID: \"24d4c8b9-b48d-425d-95d4-502576fba56a\") " Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.371417 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24d4c8b9-b48d-425d-95d4-502576fba56a-utilities" (OuterVolumeSpecName: "utilities") pod "24d4c8b9-b48d-425d-95d4-502576fba56a" (UID: "24d4c8b9-b48d-425d-95d4-502576fba56a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.384888 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24d4c8b9-b48d-425d-95d4-502576fba56a-kube-api-access-q4l2d" (OuterVolumeSpecName: "kube-api-access-q4l2d") pod "24d4c8b9-b48d-425d-95d4-502576fba56a" (UID: "24d4c8b9-b48d-425d-95d4-502576fba56a"). InnerVolumeSpecName "kube-api-access-q4l2d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.436397 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24d4c8b9-b48d-425d-95d4-502576fba56a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24d4c8b9-b48d-425d-95d4-502576fba56a" (UID: "24d4c8b9-b48d-425d-95d4-502576fba56a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.473370 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4l2d\" (UniqueName: \"kubernetes.io/projected/24d4c8b9-b48d-425d-95d4-502576fba56a-kube-api-access-q4l2d\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.473399 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24d4c8b9-b48d-425d-95d4-502576fba56a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.473409 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24d4c8b9-b48d-425d-95d4-502576fba56a-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.481829 4847 generic.go:334] "Generic (PLEG): container finished" podID="6c2669b9-bb6f-484b-9a1b-70c6903244c5" containerID="3f4574977981dfed511b6a1a04742b5f7c0d011ea264e70c32cfaf9f247e7c48" exitCode=0 Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.481914 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jtqrj" event={"ID":"6c2669b9-bb6f-484b-9a1b-70c6903244c5","Type":"ContainerDied","Data":"3f4574977981dfed511b6a1a04742b5f7c0d011ea264e70c32cfaf9f247e7c48"} Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.485914 4847 generic.go:334] "Generic (PLEG): container 
finished" podID="24d4c8b9-b48d-425d-95d4-502576fba56a" containerID="bb447423fa88e5a712ee8e9871dee09f527c4ade05475070d5a6a3f7538f4b55" exitCode=0 Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.485955 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlnx" event={"ID":"24d4c8b9-b48d-425d-95d4-502576fba56a","Type":"ContainerDied","Data":"bb447423fa88e5a712ee8e9871dee09f527c4ade05475070d5a6a3f7538f4b55"} Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.485985 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mnlnx" event={"ID":"24d4c8b9-b48d-425d-95d4-502576fba56a","Type":"ContainerDied","Data":"40da7f7599fae13dd7ca6b68e57019ff705f5a9bfd94328a8829109dbfae1baa"} Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.486006 4847 scope.go:117] "RemoveContainer" containerID="bb447423fa88e5a712ee8e9871dee09f527c4ade05475070d5a6a3f7538f4b55" Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.486280 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mnlnx" Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.528060 4847 scope.go:117] "RemoveContainer" containerID="177159c2cb53e0d7f43a5ee0d38149024c6a75587b53e4aa068a71ffc34f9c98" Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.548482 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mnlnx"] Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.558538 4847 scope.go:117] "RemoveContainer" containerID="40293a03cd04c62ba4b3f42d419007b07445f163dce6da8a39f6d74414d9bb22" Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.566127 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mnlnx"] Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.607271 4847 scope.go:117] "RemoveContainer" containerID="bb447423fa88e5a712ee8e9871dee09f527c4ade05475070d5a6a3f7538f4b55" Feb 18 00:59:29 crc kubenswrapper[4847]: E0218 00:59:29.608005 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb447423fa88e5a712ee8e9871dee09f527c4ade05475070d5a6a3f7538f4b55\": container with ID starting with bb447423fa88e5a712ee8e9871dee09f527c4ade05475070d5a6a3f7538f4b55 not found: ID does not exist" containerID="bb447423fa88e5a712ee8e9871dee09f527c4ade05475070d5a6a3f7538f4b55" Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.608041 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb447423fa88e5a712ee8e9871dee09f527c4ade05475070d5a6a3f7538f4b55"} err="failed to get container status \"bb447423fa88e5a712ee8e9871dee09f527c4ade05475070d5a6a3f7538f4b55\": rpc error: code = NotFound desc = could not find container \"bb447423fa88e5a712ee8e9871dee09f527c4ade05475070d5a6a3f7538f4b55\": container with ID starting with bb447423fa88e5a712ee8e9871dee09f527c4ade05475070d5a6a3f7538f4b55 not 
found: ID does not exist" Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.608065 4847 scope.go:117] "RemoveContainer" containerID="177159c2cb53e0d7f43a5ee0d38149024c6a75587b53e4aa068a71ffc34f9c98" Feb 18 00:59:29 crc kubenswrapper[4847]: E0218 00:59:29.608575 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"177159c2cb53e0d7f43a5ee0d38149024c6a75587b53e4aa068a71ffc34f9c98\": container with ID starting with 177159c2cb53e0d7f43a5ee0d38149024c6a75587b53e4aa068a71ffc34f9c98 not found: ID does not exist" containerID="177159c2cb53e0d7f43a5ee0d38149024c6a75587b53e4aa068a71ffc34f9c98" Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.608612 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"177159c2cb53e0d7f43a5ee0d38149024c6a75587b53e4aa068a71ffc34f9c98"} err="failed to get container status \"177159c2cb53e0d7f43a5ee0d38149024c6a75587b53e4aa068a71ffc34f9c98\": rpc error: code = NotFound desc = could not find container \"177159c2cb53e0d7f43a5ee0d38149024c6a75587b53e4aa068a71ffc34f9c98\": container with ID starting with 177159c2cb53e0d7f43a5ee0d38149024c6a75587b53e4aa068a71ffc34f9c98 not found: ID does not exist" Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.608626 4847 scope.go:117] "RemoveContainer" containerID="40293a03cd04c62ba4b3f42d419007b07445f163dce6da8a39f6d74414d9bb22" Feb 18 00:59:29 crc kubenswrapper[4847]: E0218 00:59:29.608897 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40293a03cd04c62ba4b3f42d419007b07445f163dce6da8a39f6d74414d9bb22\": container with ID starting with 40293a03cd04c62ba4b3f42d419007b07445f163dce6da8a39f6d74414d9bb22 not found: ID does not exist" containerID="40293a03cd04c62ba4b3f42d419007b07445f163dce6da8a39f6d74414d9bb22" Feb 18 00:59:29 crc kubenswrapper[4847]: I0218 00:59:29.608922 4847 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40293a03cd04c62ba4b3f42d419007b07445f163dce6da8a39f6d74414d9bb22"} err="failed to get container status \"40293a03cd04c62ba4b3f42d419007b07445f163dce6da8a39f6d74414d9bb22\": rpc error: code = NotFound desc = could not find container \"40293a03cd04c62ba4b3f42d419007b07445f163dce6da8a39f6d74414d9bb22\": container with ID starting with 40293a03cd04c62ba4b3f42d419007b07445f163dce6da8a39f6d74414d9bb22 not found: ID does not exist" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.020403 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jtqrj" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.110047 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6c2669b9-bb6f-484b-9a1b-70c6903244c5-inventory-0\") pod \"6c2669b9-bb6f-484b-9a1b-70c6903244c5\" (UID: \"6c2669b9-bb6f-484b-9a1b-70c6903244c5\") " Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.110236 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9qjg\" (UniqueName: \"kubernetes.io/projected/6c2669b9-bb6f-484b-9a1b-70c6903244c5-kube-api-access-n9qjg\") pod \"6c2669b9-bb6f-484b-9a1b-70c6903244c5\" (UID: \"6c2669b9-bb6f-484b-9a1b-70c6903244c5\") " Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.110312 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c2669b9-bb6f-484b-9a1b-70c6903244c5-ssh-key-openstack-edpm-ipam\") pod \"6c2669b9-bb6f-484b-9a1b-70c6903244c5\" (UID: \"6c2669b9-bb6f-484b-9a1b-70c6903244c5\") " Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.128885 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6c2669b9-bb6f-484b-9a1b-70c6903244c5-kube-api-access-n9qjg" (OuterVolumeSpecName: "kube-api-access-n9qjg") pod "6c2669b9-bb6f-484b-9a1b-70c6903244c5" (UID: "6c2669b9-bb6f-484b-9a1b-70c6903244c5"). InnerVolumeSpecName "kube-api-access-n9qjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.140040 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2669b9-bb6f-484b-9a1b-70c6903244c5-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "6c2669b9-bb6f-484b-9a1b-70c6903244c5" (UID: "6c2669b9-bb6f-484b-9a1b-70c6903244c5"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.145027 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2669b9-bb6f-484b-9a1b-70c6903244c5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6c2669b9-bb6f-484b-9a1b-70c6903244c5" (UID: "6c2669b9-bb6f-484b-9a1b-70c6903244c5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.212801 4847 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6c2669b9-bb6f-484b-9a1b-70c6903244c5-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.212841 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9qjg\" (UniqueName: \"kubernetes.io/projected/6c2669b9-bb6f-484b-9a1b-70c6903244c5-kube-api-access-n9qjg\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.212854 4847 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6c2669b9-bb6f-484b-9a1b-70c6903244c5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.425785 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24d4c8b9-b48d-425d-95d4-502576fba56a" path="/var/lib/kubelet/pods/24d4c8b9-b48d-425d-95d4-502576fba56a/volumes" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.523279 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jtqrj" event={"ID":"6c2669b9-bb6f-484b-9a1b-70c6903244c5","Type":"ContainerDied","Data":"f229541e3a528e2b40e6f596ad5c1a2e398aa3cf7d4ea4809573ce3cd3c5620a"} Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.523330 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f229541e3a528e2b40e6f596ad5c1a2e398aa3cf7d4ea4809573ce3cd3c5620a" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.523339 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jtqrj" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.616644 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8bqxp"] Feb 18 00:59:31 crc kubenswrapper[4847]: E0218 00:59:31.617243 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d4c8b9-b48d-425d-95d4-502576fba56a" containerName="extract-utilities" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.617265 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d4c8b9-b48d-425d-95d4-502576fba56a" containerName="extract-utilities" Feb 18 00:59:31 crc kubenswrapper[4847]: E0218 00:59:31.617283 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d4c8b9-b48d-425d-95d4-502576fba56a" containerName="registry-server" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.617293 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d4c8b9-b48d-425d-95d4-502576fba56a" containerName="registry-server" Feb 18 00:59:31 crc kubenswrapper[4847]: E0218 00:59:31.617315 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d4c8b9-b48d-425d-95d4-502576fba56a" containerName="extract-content" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.617325 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d4c8b9-b48d-425d-95d4-502576fba56a" containerName="extract-content" Feb 18 00:59:31 crc kubenswrapper[4847]: E0218 00:59:31.617350 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2669b9-bb6f-484b-9a1b-70c6903244c5" containerName="ssh-known-hosts-edpm-deployment" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.617358 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2669b9-bb6f-484b-9a1b-70c6903244c5" containerName="ssh-known-hosts-edpm-deployment" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.617618 4847 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="24d4c8b9-b48d-425d-95d4-502576fba56a" containerName="registry-server" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.617654 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c2669b9-bb6f-484b-9a1b-70c6903244c5" containerName="ssh-known-hosts-edpm-deployment" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.618590 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8bqxp" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.620812 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.621038 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xzl9l" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.621056 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.622056 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.629435 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8bqxp"] Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.726484 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d946c96-5bd1-4a59-b58c-eedf4b3bc460-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8bqxp\" (UID: \"3d946c96-5bd1-4a59-b58c-eedf4b3bc460\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8bqxp" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.726700 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-29h5n\" (UniqueName: \"kubernetes.io/projected/3d946c96-5bd1-4a59-b58c-eedf4b3bc460-kube-api-access-29h5n\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8bqxp\" (UID: \"3d946c96-5bd1-4a59-b58c-eedf4b3bc460\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8bqxp" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.726920 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d946c96-5bd1-4a59-b58c-eedf4b3bc460-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8bqxp\" (UID: \"3d946c96-5bd1-4a59-b58c-eedf4b3bc460\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8bqxp" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.829441 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d946c96-5bd1-4a59-b58c-eedf4b3bc460-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8bqxp\" (UID: \"3d946c96-5bd1-4a59-b58c-eedf4b3bc460\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8bqxp" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.829692 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29h5n\" (UniqueName: \"kubernetes.io/projected/3d946c96-5bd1-4a59-b58c-eedf4b3bc460-kube-api-access-29h5n\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8bqxp\" (UID: \"3d946c96-5bd1-4a59-b58c-eedf4b3bc460\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8bqxp" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.830067 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d946c96-5bd1-4a59-b58c-eedf4b3bc460-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8bqxp\" (UID: 
\"3d946c96-5bd1-4a59-b58c-eedf4b3bc460\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8bqxp" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.835659 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d946c96-5bd1-4a59-b58c-eedf4b3bc460-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8bqxp\" (UID: \"3d946c96-5bd1-4a59-b58c-eedf4b3bc460\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8bqxp" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.837526 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d946c96-5bd1-4a59-b58c-eedf4b3bc460-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8bqxp\" (UID: \"3d946c96-5bd1-4a59-b58c-eedf4b3bc460\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8bqxp" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.857994 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29h5n\" (UniqueName: \"kubernetes.io/projected/3d946c96-5bd1-4a59-b58c-eedf4b3bc460-kube-api-access-29h5n\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8bqxp\" (UID: \"3d946c96-5bd1-4a59-b58c-eedf4b3bc460\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8bqxp" Feb 18 00:59:31 crc kubenswrapper[4847]: I0218 00:59:31.942319 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8bqxp" Feb 18 00:59:32 crc kubenswrapper[4847]: I0218 00:59:32.688499 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8bqxp"] Feb 18 00:59:32 crc kubenswrapper[4847]: I0218 00:59:32.937280 4847 scope.go:117] "RemoveContainer" containerID="3e5125c28a0fae1cd119f373fc2cf7de8b5b95c52c0e789c1c1ead030b8e196f" Feb 18 00:59:32 crc kubenswrapper[4847]: I0218 00:59:32.985184 4847 scope.go:117] "RemoveContainer" containerID="886a6d9bf7552aa12650386aee94fa74ce31831d72939ed8aeddb060474a946d" Feb 18 00:59:33 crc kubenswrapper[4847]: I0218 00:59:33.064659 4847 scope.go:117] "RemoveContainer" containerID="b38ffced6b3f4b1b2342c91c61f122a85b0deac1a667511704d69d1ed8f11cf4" Feb 18 00:59:33 crc kubenswrapper[4847]: I0218 00:59:33.105909 4847 scope.go:117] "RemoveContainer" containerID="a8ffb10c1ce865f1d14823461d9725971932b4b00864c9caee4d76a9ab16d82e" Feb 18 00:59:33 crc kubenswrapper[4847]: E0218 00:59:33.405899 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:59:33 crc kubenswrapper[4847]: I0218 00:59:33.548160 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8bqxp" event={"ID":"3d946c96-5bd1-4a59-b58c-eedf4b3bc460","Type":"ContainerStarted","Data":"3dae776bb1958b663994d3750b9355f8462badbc04714d0623c6f598a7e58a9c"} Feb 18 00:59:33 crc kubenswrapper[4847]: I0218 00:59:33.548320 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8bqxp" 
event={"ID":"3d946c96-5bd1-4a59-b58c-eedf4b3bc460","Type":"ContainerStarted","Data":"77be1400128ee61db62b0bfddc128fe468dbe5a39de28bd8682799a91c336571"} Feb 18 00:59:33 crc kubenswrapper[4847]: I0218 00:59:33.577538 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8bqxp" podStartSLOduration=2.11516659 podStartE2EDuration="2.577516822s" podCreationTimestamp="2026-02-18 00:59:31 +0000 UTC" firstStartedPulling="2026-02-18 00:59:32.673436352 +0000 UTC m=+2046.050787324" lastFinishedPulling="2026-02-18 00:59:33.135786614 +0000 UTC m=+2046.513137556" observedRunningTime="2026-02-18 00:59:33.567398569 +0000 UTC m=+2046.944749501" watchObservedRunningTime="2026-02-18 00:59:33.577516822 +0000 UTC m=+2046.954867764" Feb 18 00:59:37 crc kubenswrapper[4847]: I0218 00:59:37.323543 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5tgd9" Feb 18 00:59:37 crc kubenswrapper[4847]: I0218 00:59:37.324860 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5tgd9" Feb 18 00:59:37 crc kubenswrapper[4847]: I0218 00:59:37.429016 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5tgd9" Feb 18 00:59:37 crc kubenswrapper[4847]: I0218 00:59:37.674129 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5tgd9" Feb 18 00:59:37 crc kubenswrapper[4847]: I0218 00:59:37.718099 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5tgd9"] Feb 18 00:59:38 crc kubenswrapper[4847]: I0218 00:59:38.057999 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-xx9bl"] Feb 18 00:59:38 crc kubenswrapper[4847]: I0218 00:59:38.072177 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell0-db-create-xx9bl"] Feb 18 00:59:38 crc kubenswrapper[4847]: E0218 00:59:38.413983 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:59:39 crc kubenswrapper[4847]: I0218 00:59:39.040936 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-pdsmh"] Feb 18 00:59:39 crc kubenswrapper[4847]: I0218 00:59:39.056928 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-pdsmh"] Feb 18 00:59:39 crc kubenswrapper[4847]: I0218 00:59:39.428252 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2217ced3-3917-43e6-8c1b-23c7184f4591" path="/var/lib/kubelet/pods/2217ced3-3917-43e6-8c1b-23c7184f4591/volumes" Feb 18 00:59:39 crc kubenswrapper[4847]: I0218 00:59:39.429589 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="defca7ab-d2c8-4c4a-910f-06bebeba7b81" path="/var/lib/kubelet/pods/defca7ab-d2c8-4c4a-910f-06bebeba7b81/volumes" Feb 18 00:59:39 crc kubenswrapper[4847]: I0218 00:59:39.648636 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5tgd9" podUID="ec9c4d77-538d-4f0a-ad1c-c725e2f66209" containerName="registry-server" containerID="cri-o://f5f66ded2dbd46e223d0d39dd8b53df65b9fa0866ac0934897d0f5d40b521052" gracePeriod=2 Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.047715 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-fqndf"] Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.067522 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5ce0-account-create-update-5qbbm"] Feb 18 00:59:40 crc 
kubenswrapper[4847]: I0218 00:59:40.082769 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-c4f1-account-create-update-qvp96"] Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.094710 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-c4f1-account-create-update-qvp96"] Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.104206 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-92b2-account-create-update-7v9rd"] Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.117249 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5ce0-account-create-update-5qbbm"] Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.121955 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-fqndf"] Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.131863 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-92b2-account-create-update-7v9rd"] Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.175359 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5tgd9" Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.268624 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zcg\" (UniqueName: \"kubernetes.io/projected/ec9c4d77-538d-4f0a-ad1c-c725e2f66209-kube-api-access-x4zcg\") pod \"ec9c4d77-538d-4f0a-ad1c-c725e2f66209\" (UID: \"ec9c4d77-538d-4f0a-ad1c-c725e2f66209\") " Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.268906 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec9c4d77-538d-4f0a-ad1c-c725e2f66209-catalog-content\") pod \"ec9c4d77-538d-4f0a-ad1c-c725e2f66209\" (UID: \"ec9c4d77-538d-4f0a-ad1c-c725e2f66209\") " Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.269059 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec9c4d77-538d-4f0a-ad1c-c725e2f66209-utilities\") pod \"ec9c4d77-538d-4f0a-ad1c-c725e2f66209\" (UID: \"ec9c4d77-538d-4f0a-ad1c-c725e2f66209\") " Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.270212 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec9c4d77-538d-4f0a-ad1c-c725e2f66209-utilities" (OuterVolumeSpecName: "utilities") pod "ec9c4d77-538d-4f0a-ad1c-c725e2f66209" (UID: "ec9c4d77-538d-4f0a-ad1c-c725e2f66209"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.277879 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec9c4d77-538d-4f0a-ad1c-c725e2f66209-kube-api-access-x4zcg" (OuterVolumeSpecName: "kube-api-access-x4zcg") pod "ec9c4d77-538d-4f0a-ad1c-c725e2f66209" (UID: "ec9c4d77-538d-4f0a-ad1c-c725e2f66209"). InnerVolumeSpecName "kube-api-access-x4zcg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.371277 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec9c4d77-538d-4f0a-ad1c-c725e2f66209-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.371314 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zcg\" (UniqueName: \"kubernetes.io/projected/ec9c4d77-538d-4f0a-ad1c-c725e2f66209-kube-api-access-x4zcg\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.396555 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec9c4d77-538d-4f0a-ad1c-c725e2f66209-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec9c4d77-538d-4f0a-ad1c-c725e2f66209" (UID: "ec9c4d77-538d-4f0a-ad1c-c725e2f66209"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.474715 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec9c4d77-538d-4f0a-ad1c-c725e2f66209-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.664652 4847 generic.go:334] "Generic (PLEG): container finished" podID="ec9c4d77-538d-4f0a-ad1c-c725e2f66209" containerID="f5f66ded2dbd46e223d0d39dd8b53df65b9fa0866ac0934897d0f5d40b521052" exitCode=0 Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.664717 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tgd9" event={"ID":"ec9c4d77-538d-4f0a-ad1c-c725e2f66209","Type":"ContainerDied","Data":"f5f66ded2dbd46e223d0d39dd8b53df65b9fa0866ac0934897d0f5d40b521052"} Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.664761 4847 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-5tgd9" event={"ID":"ec9c4d77-538d-4f0a-ad1c-c725e2f66209","Type":"ContainerDied","Data":"5c85c3df5a46f47208e5537e7b6bb3b9edc3a4fd23ae72b3b93d47b4298886de"} Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.664796 4847 scope.go:117] "RemoveContainer" containerID="f5f66ded2dbd46e223d0d39dd8b53df65b9fa0866ac0934897d0f5d40b521052" Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.665043 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5tgd9" Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.744039 4847 scope.go:117] "RemoveContainer" containerID="44ab6273b221da29a1ccf8e96df7da1a2ba8485dafba545f9813c89c50e33e8e" Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.761472 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5tgd9"] Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.772867 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5tgd9"] Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.791009 4847 scope.go:117] "RemoveContainer" containerID="1bbb1edde4bae1bf12f4d28a39a2098b84438072f5870a7750644f537024e068" Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.838489 4847 scope.go:117] "RemoveContainer" containerID="f5f66ded2dbd46e223d0d39dd8b53df65b9fa0866ac0934897d0f5d40b521052" Feb 18 00:59:40 crc kubenswrapper[4847]: E0218 00:59:40.839308 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5f66ded2dbd46e223d0d39dd8b53df65b9fa0866ac0934897d0f5d40b521052\": container with ID starting with f5f66ded2dbd46e223d0d39dd8b53df65b9fa0866ac0934897d0f5d40b521052 not found: ID does not exist" containerID="f5f66ded2dbd46e223d0d39dd8b53df65b9fa0866ac0934897d0f5d40b521052" Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.839382 4847 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5f66ded2dbd46e223d0d39dd8b53df65b9fa0866ac0934897d0f5d40b521052"} err="failed to get container status \"f5f66ded2dbd46e223d0d39dd8b53df65b9fa0866ac0934897d0f5d40b521052\": rpc error: code = NotFound desc = could not find container \"f5f66ded2dbd46e223d0d39dd8b53df65b9fa0866ac0934897d0f5d40b521052\": container with ID starting with f5f66ded2dbd46e223d0d39dd8b53df65b9fa0866ac0934897d0f5d40b521052 not found: ID does not exist" Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.839428 4847 scope.go:117] "RemoveContainer" containerID="44ab6273b221da29a1ccf8e96df7da1a2ba8485dafba545f9813c89c50e33e8e" Feb 18 00:59:40 crc kubenswrapper[4847]: E0218 00:59:40.840009 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44ab6273b221da29a1ccf8e96df7da1a2ba8485dafba545f9813c89c50e33e8e\": container with ID starting with 44ab6273b221da29a1ccf8e96df7da1a2ba8485dafba545f9813c89c50e33e8e not found: ID does not exist" containerID="44ab6273b221da29a1ccf8e96df7da1a2ba8485dafba545f9813c89c50e33e8e" Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.840072 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44ab6273b221da29a1ccf8e96df7da1a2ba8485dafba545f9813c89c50e33e8e"} err="failed to get container status \"44ab6273b221da29a1ccf8e96df7da1a2ba8485dafba545f9813c89c50e33e8e\": rpc error: code = NotFound desc = could not find container \"44ab6273b221da29a1ccf8e96df7da1a2ba8485dafba545f9813c89c50e33e8e\": container with ID starting with 44ab6273b221da29a1ccf8e96df7da1a2ba8485dafba545f9813c89c50e33e8e not found: ID does not exist" Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.840119 4847 scope.go:117] "RemoveContainer" containerID="1bbb1edde4bae1bf12f4d28a39a2098b84438072f5870a7750644f537024e068" Feb 18 00:59:40 crc kubenswrapper[4847]: E0218 
00:59:40.840490 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bbb1edde4bae1bf12f4d28a39a2098b84438072f5870a7750644f537024e068\": container with ID starting with 1bbb1edde4bae1bf12f4d28a39a2098b84438072f5870a7750644f537024e068 not found: ID does not exist" containerID="1bbb1edde4bae1bf12f4d28a39a2098b84438072f5870a7750644f537024e068" Feb 18 00:59:40 crc kubenswrapper[4847]: I0218 00:59:40.840526 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bbb1edde4bae1bf12f4d28a39a2098b84438072f5870a7750644f537024e068"} err="failed to get container status \"1bbb1edde4bae1bf12f4d28a39a2098b84438072f5870a7750644f537024e068\": rpc error: code = NotFound desc = could not find container \"1bbb1edde4bae1bf12f4d28a39a2098b84438072f5870a7750644f537024e068\": container with ID starting with 1bbb1edde4bae1bf12f4d28a39a2098b84438072f5870a7750644f537024e068 not found: ID does not exist" Feb 18 00:59:41 crc kubenswrapper[4847]: I0218 00:59:41.429260 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b5e6ecd-a8a5-4722-8195-aa62753ce56f" path="/var/lib/kubelet/pods/0b5e6ecd-a8a5-4722-8195-aa62753ce56f/volumes" Feb 18 00:59:41 crc kubenswrapper[4847]: I0218 00:59:41.430639 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3caed2d8-3c83-45dd-946b-b4765bb99f58" path="/var/lib/kubelet/pods/3caed2d8-3c83-45dd-946b-b4765bb99f58/volumes" Feb 18 00:59:41 crc kubenswrapper[4847]: I0218 00:59:41.431544 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ec0cbce-5157-43e7-9aba-7973b14170ce" path="/var/lib/kubelet/pods/3ec0cbce-5157-43e7-9aba-7973b14170ce/volumes" Feb 18 00:59:41 crc kubenswrapper[4847]: I0218 00:59:41.432520 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9808ca0d-1d9f-4692-ae85-975a6ca3822f" 
path="/var/lib/kubelet/pods/9808ca0d-1d9f-4692-ae85-975a6ca3822f/volumes" Feb 18 00:59:41 crc kubenswrapper[4847]: I0218 00:59:41.433947 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec9c4d77-538d-4f0a-ad1c-c725e2f66209" path="/var/lib/kubelet/pods/ec9c4d77-538d-4f0a-ad1c-c725e2f66209/volumes" Feb 18 00:59:41 crc kubenswrapper[4847]: I0218 00:59:41.681152 4847 generic.go:334] "Generic (PLEG): container finished" podID="3d946c96-5bd1-4a59-b58c-eedf4b3bc460" containerID="3dae776bb1958b663994d3750b9355f8462badbc04714d0623c6f598a7e58a9c" exitCode=0 Feb 18 00:59:41 crc kubenswrapper[4847]: I0218 00:59:41.681234 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8bqxp" event={"ID":"3d946c96-5bd1-4a59-b58c-eedf4b3bc460","Type":"ContainerDied","Data":"3dae776bb1958b663994d3750b9355f8462badbc04714d0623c6f598a7e58a9c"} Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.194848 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8bqxp" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.253183 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d946c96-5bd1-4a59-b58c-eedf4b3bc460-inventory\") pod \"3d946c96-5bd1-4a59-b58c-eedf4b3bc460\" (UID: \"3d946c96-5bd1-4a59-b58c-eedf4b3bc460\") " Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.253254 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d946c96-5bd1-4a59-b58c-eedf4b3bc460-ssh-key-openstack-edpm-ipam\") pod \"3d946c96-5bd1-4a59-b58c-eedf4b3bc460\" (UID: \"3d946c96-5bd1-4a59-b58c-eedf4b3bc460\") " Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.253439 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29h5n\" (UniqueName: \"kubernetes.io/projected/3d946c96-5bd1-4a59-b58c-eedf4b3bc460-kube-api-access-29h5n\") pod \"3d946c96-5bd1-4a59-b58c-eedf4b3bc460\" (UID: \"3d946c96-5bd1-4a59-b58c-eedf4b3bc460\") " Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.261481 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d946c96-5bd1-4a59-b58c-eedf4b3bc460-kube-api-access-29h5n" (OuterVolumeSpecName: "kube-api-access-29h5n") pod "3d946c96-5bd1-4a59-b58c-eedf4b3bc460" (UID: "3d946c96-5bd1-4a59-b58c-eedf4b3bc460"). InnerVolumeSpecName "kube-api-access-29h5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.292397 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d946c96-5bd1-4a59-b58c-eedf4b3bc460-inventory" (OuterVolumeSpecName: "inventory") pod "3d946c96-5bd1-4a59-b58c-eedf4b3bc460" (UID: "3d946c96-5bd1-4a59-b58c-eedf4b3bc460"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.299722 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d946c96-5bd1-4a59-b58c-eedf4b3bc460-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3d946c96-5bd1-4a59-b58c-eedf4b3bc460" (UID: "3d946c96-5bd1-4a59-b58c-eedf4b3bc460"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.356436 4847 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d946c96-5bd1-4a59-b58c-eedf4b3bc460-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.356484 4847 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d946c96-5bd1-4a59-b58c-eedf4b3bc460-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.356498 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29h5n\" (UniqueName: \"kubernetes.io/projected/3d946c96-5bd1-4a59-b58c-eedf4b3bc460-kube-api-access-29h5n\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.712202 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8bqxp" event={"ID":"3d946c96-5bd1-4a59-b58c-eedf4b3bc460","Type":"ContainerDied","Data":"77be1400128ee61db62b0bfddc128fe468dbe5a39de28bd8682799a91c336571"} Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.712307 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8bqxp" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.712314 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77be1400128ee61db62b0bfddc128fe468dbe5a39de28bd8682799a91c336571" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.837172 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm"] Feb 18 00:59:43 crc kubenswrapper[4847]: E0218 00:59:43.837710 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec9c4d77-538d-4f0a-ad1c-c725e2f66209" containerName="extract-content" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.837730 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec9c4d77-538d-4f0a-ad1c-c725e2f66209" containerName="extract-content" Feb 18 00:59:43 crc kubenswrapper[4847]: E0218 00:59:43.837751 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec9c4d77-538d-4f0a-ad1c-c725e2f66209" containerName="extract-utilities" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.837761 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec9c4d77-538d-4f0a-ad1c-c725e2f66209" containerName="extract-utilities" Feb 18 00:59:43 crc kubenswrapper[4847]: E0218 00:59:43.837784 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d946c96-5bd1-4a59-b58c-eedf4b3bc460" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.837795 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d946c96-5bd1-4a59-b58c-eedf4b3bc460" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 00:59:43 crc kubenswrapper[4847]: E0218 00:59:43.837820 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec9c4d77-538d-4f0a-ad1c-c725e2f66209" containerName="registry-server" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 
00:59:43.837828 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec9c4d77-538d-4f0a-ad1c-c725e2f66209" containerName="registry-server" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.838078 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d946c96-5bd1-4a59-b58c-eedf4b3bc460" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.838110 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec9c4d77-538d-4f0a-ad1c-c725e2f66209" containerName="registry-server" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.840470 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.844107 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.844190 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.844568 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xzl9l" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.846786 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.856911 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm"] Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.866409 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f33724ce-fdec-4a31-8d15-f39244f2392e-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm\" (UID: \"f33724ce-fdec-4a31-8d15-f39244f2392e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.866841 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbbvr\" (UniqueName: \"kubernetes.io/projected/f33724ce-fdec-4a31-8d15-f39244f2392e-kube-api-access-nbbvr\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm\" (UID: \"f33724ce-fdec-4a31-8d15-f39244f2392e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.867082 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f33724ce-fdec-4a31-8d15-f39244f2392e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm\" (UID: \"f33724ce-fdec-4a31-8d15-f39244f2392e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.969686 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f33724ce-fdec-4a31-8d15-f39244f2392e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm\" (UID: \"f33724ce-fdec-4a31-8d15-f39244f2392e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.969826 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbbvr\" (UniqueName: \"kubernetes.io/projected/f33724ce-fdec-4a31-8d15-f39244f2392e-kube-api-access-nbbvr\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm\" (UID: \"f33724ce-fdec-4a31-8d15-f39244f2392e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm" Feb 18 
00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.969960 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f33724ce-fdec-4a31-8d15-f39244f2392e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm\" (UID: \"f33724ce-fdec-4a31-8d15-f39244f2392e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.975084 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f33724ce-fdec-4a31-8d15-f39244f2392e-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm\" (UID: \"f33724ce-fdec-4a31-8d15-f39244f2392e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.975403 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f33724ce-fdec-4a31-8d15-f39244f2392e-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm\" (UID: \"f33724ce-fdec-4a31-8d15-f39244f2392e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm" Feb 18 00:59:43 crc kubenswrapper[4847]: I0218 00:59:43.993764 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbbvr\" (UniqueName: \"kubernetes.io/projected/f33724ce-fdec-4a31-8d15-f39244f2392e-kube-api-access-nbbvr\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm\" (UID: \"f33724ce-fdec-4a31-8d15-f39244f2392e\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm" Feb 18 00:59:44 crc kubenswrapper[4847]: I0218 00:59:44.165367 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm" Feb 18 00:59:44 crc kubenswrapper[4847]: I0218 00:59:44.759586 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm"] Feb 18 00:59:44 crc kubenswrapper[4847]: W0218 00:59:44.766134 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf33724ce_fdec_4a31_8d15_f39244f2392e.slice/crio-37ff4d6075647e23b3b4196eaf2a80c59a380aaf3c90cdb5b1b0bba002e7dbd2 WatchSource:0}: Error finding container 37ff4d6075647e23b3b4196eaf2a80c59a380aaf3c90cdb5b1b0bba002e7dbd2: Status 404 returned error can't find the container with id 37ff4d6075647e23b3b4196eaf2a80c59a380aaf3c90cdb5b1b0bba002e7dbd2 Feb 18 00:59:45 crc kubenswrapper[4847]: E0218 00:59:45.407042 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 00:59:45 crc kubenswrapper[4847]: I0218 00:59:45.737991 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm" event={"ID":"f33724ce-fdec-4a31-8d15-f39244f2392e","Type":"ContainerStarted","Data":"6884431e985ed64b82005bc4e6a364b56acb5dd7230da66bdcae82fcdf06c41e"} Feb 18 00:59:45 crc kubenswrapper[4847]: I0218 00:59:45.738313 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm" event={"ID":"f33724ce-fdec-4a31-8d15-f39244f2392e","Type":"ContainerStarted","Data":"37ff4d6075647e23b3b4196eaf2a80c59a380aaf3c90cdb5b1b0bba002e7dbd2"} Feb 18 00:59:45 crc kubenswrapper[4847]: I0218 00:59:45.757169 4847 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm" podStartSLOduration=2.2587284690000002 podStartE2EDuration="2.757149605s" podCreationTimestamp="2026-02-18 00:59:43 +0000 UTC" firstStartedPulling="2026-02-18 00:59:44.769375059 +0000 UTC m=+2058.146726001" lastFinishedPulling="2026-02-18 00:59:45.267796185 +0000 UTC m=+2058.645147137" observedRunningTime="2026-02-18 00:59:45.752045673 +0000 UTC m=+2059.129396615" watchObservedRunningTime="2026-02-18 00:59:45.757149605 +0000 UTC m=+2059.134500547" Feb 18 00:59:50 crc kubenswrapper[4847]: E0218 00:59:50.406671 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 00:59:55 crc kubenswrapper[4847]: I0218 00:59:55.861168 4847 generic.go:334] "Generic (PLEG): container finished" podID="f33724ce-fdec-4a31-8d15-f39244f2392e" containerID="6884431e985ed64b82005bc4e6a364b56acb5dd7230da66bdcae82fcdf06c41e" exitCode=0 Feb 18 00:59:55 crc kubenswrapper[4847]: I0218 00:59:55.861299 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm" event={"ID":"f33724ce-fdec-4a31-8d15-f39244f2392e","Type":"ContainerDied","Data":"6884431e985ed64b82005bc4e6a364b56acb5dd7230da66bdcae82fcdf06c41e"} Feb 18 00:59:57 crc kubenswrapper[4847]: I0218 00:59:57.481728 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm" Feb 18 00:59:57 crc kubenswrapper[4847]: I0218 00:59:57.664824 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbbvr\" (UniqueName: \"kubernetes.io/projected/f33724ce-fdec-4a31-8d15-f39244f2392e-kube-api-access-nbbvr\") pod \"f33724ce-fdec-4a31-8d15-f39244f2392e\" (UID: \"f33724ce-fdec-4a31-8d15-f39244f2392e\") " Feb 18 00:59:57 crc kubenswrapper[4847]: I0218 00:59:57.665030 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f33724ce-fdec-4a31-8d15-f39244f2392e-ssh-key-openstack-edpm-ipam\") pod \"f33724ce-fdec-4a31-8d15-f39244f2392e\" (UID: \"f33724ce-fdec-4a31-8d15-f39244f2392e\") " Feb 18 00:59:57 crc kubenswrapper[4847]: I0218 00:59:57.665157 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f33724ce-fdec-4a31-8d15-f39244f2392e-inventory\") pod \"f33724ce-fdec-4a31-8d15-f39244f2392e\" (UID: \"f33724ce-fdec-4a31-8d15-f39244f2392e\") " Feb 18 00:59:57 crc kubenswrapper[4847]: I0218 00:59:57.672906 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f33724ce-fdec-4a31-8d15-f39244f2392e-kube-api-access-nbbvr" (OuterVolumeSpecName: "kube-api-access-nbbvr") pod "f33724ce-fdec-4a31-8d15-f39244f2392e" (UID: "f33724ce-fdec-4a31-8d15-f39244f2392e"). InnerVolumeSpecName "kube-api-access-nbbvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 00:59:57 crc kubenswrapper[4847]: I0218 00:59:57.695901 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f33724ce-fdec-4a31-8d15-f39244f2392e-inventory" (OuterVolumeSpecName: "inventory") pod "f33724ce-fdec-4a31-8d15-f39244f2392e" (UID: "f33724ce-fdec-4a31-8d15-f39244f2392e"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:59:57 crc kubenswrapper[4847]: I0218 00:59:57.719370 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f33724ce-fdec-4a31-8d15-f39244f2392e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f33724ce-fdec-4a31-8d15-f39244f2392e" (UID: "f33724ce-fdec-4a31-8d15-f39244f2392e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 00:59:57 crc kubenswrapper[4847]: I0218 00:59:57.768891 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbbvr\" (UniqueName: \"kubernetes.io/projected/f33724ce-fdec-4a31-8d15-f39244f2392e-kube-api-access-nbbvr\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:57 crc kubenswrapper[4847]: I0218 00:59:57.768983 4847 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f33724ce-fdec-4a31-8d15-f39244f2392e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:57 crc kubenswrapper[4847]: I0218 00:59:57.769004 4847 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f33724ce-fdec-4a31-8d15-f39244f2392e-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 00:59:57 crc kubenswrapper[4847]: I0218 00:59:57.917878 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm" event={"ID":"f33724ce-fdec-4a31-8d15-f39244f2392e","Type":"ContainerDied","Data":"37ff4d6075647e23b3b4196eaf2a80c59a380aaf3c90cdb5b1b0bba002e7dbd2"} Feb 18 00:59:57 crc kubenswrapper[4847]: I0218 00:59:57.917924 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37ff4d6075647e23b3b4196eaf2a80c59a380aaf3c90cdb5b1b0bba002e7dbd2" Feb 18 00:59:57 crc kubenswrapper[4847]: I0218 
00:59:57.918015 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm" Feb 18 00:59:57 crc kubenswrapper[4847]: I0218 00:59:57.993734 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v"] Feb 18 00:59:57 crc kubenswrapper[4847]: E0218 00:59:57.994278 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33724ce-fdec-4a31-8d15-f39244f2392e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 00:59:57 crc kubenswrapper[4847]: I0218 00:59:57.994295 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33724ce-fdec-4a31-8d15-f39244f2392e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 00:59:57 crc kubenswrapper[4847]: I0218 00:59:57.994497 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="f33724ce-fdec-4a31-8d15-f39244f2392e" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 18 00:59:57 crc kubenswrapper[4847]: I0218 00:59:57.995347 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.001771 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.004037 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.004261 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.004393 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.004516 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xzl9l" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.005585 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v"] Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.006786 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.007300 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.007435 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.074789 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.074882 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.074919 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47w99\" (UniqueName: \"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-kube-api-access-47w99\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.074941 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.075093 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.075163 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.075191 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.075227 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.075256 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.075283 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.075307 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.075333 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.075394 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.176802 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47w99\" (UniqueName: \"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-kube-api-access-47w99\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.176857 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.176957 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.177001 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.177026 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.177057 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.177075 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.177098 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.177118 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.177150 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.177191 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.177224 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-ssh-key-openstack-edpm-ipam\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.177282 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.183176 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.183192 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.183451 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.183819 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.184713 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.185014 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.185405 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.185769 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.186202 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.186470 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.187351 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.188720 4847 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.199346 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47w99\" (UniqueName: \"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-kube-api-access-47w99\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-df98v\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.318935 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 00:59:58 crc kubenswrapper[4847]: I0218 00:59:58.994268 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v"] Feb 18 00:59:59 crc kubenswrapper[4847]: I0218 00:59:59.951670 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" event={"ID":"2796ae55-da7e-484c-a3fa-789aabef230d","Type":"ContainerStarted","Data":"8af4b02594483eec05bed43a2f0bca49c2d4ded39efe941f312774c982a37325"} Feb 18 00:59:59 crc kubenswrapper[4847]: I0218 00:59:59.952152 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" event={"ID":"2796ae55-da7e-484c-a3fa-789aabef230d","Type":"ContainerStarted","Data":"bac5ca903a047e83191f11fa58c7669973e5fab226e90820d40b999be096d340"} Feb 18 00:59:59 crc kubenswrapper[4847]: I0218 00:59:59.987322 
4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" podStartSLOduration=2.549539243 podStartE2EDuration="2.987297286s" podCreationTimestamp="2026-02-18 00:59:57 +0000 UTC" firstStartedPulling="2026-02-18 00:59:59.003918695 +0000 UTC m=+2072.381269647" lastFinishedPulling="2026-02-18 00:59:59.441676728 +0000 UTC m=+2072.819027690" observedRunningTime="2026-02-18 00:59:59.981818725 +0000 UTC m=+2073.359169687" watchObservedRunningTime="2026-02-18 00:59:59.987297286 +0000 UTC m=+2073.364648238" Feb 18 01:00:00 crc kubenswrapper[4847]: I0218 01:00:00.163701 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522940-k8hqs"] Feb 18 01:00:00 crc kubenswrapper[4847]: I0218 01:00:00.166324 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-k8hqs" Feb 18 01:00:00 crc kubenswrapper[4847]: I0218 01:00:00.170594 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 01:00:00 crc kubenswrapper[4847]: I0218 01:00:00.170881 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 01:00:00 crc kubenswrapper[4847]: I0218 01:00:00.190097 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522940-k8hqs"] Feb 18 01:00:00 crc kubenswrapper[4847]: I0218 01:00:00.326716 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gmzz\" (UniqueName: \"kubernetes.io/projected/c51f4019-3d36-45e9-a342-72e8b4ef9745-kube-api-access-2gmzz\") pod \"collect-profiles-29522940-k8hqs\" (UID: \"c51f4019-3d36-45e9-a342-72e8b4ef9745\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-k8hqs" Feb 18 01:00:00 crc kubenswrapper[4847]: I0218 01:00:00.326834 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c51f4019-3d36-45e9-a342-72e8b4ef9745-config-volume\") pod \"collect-profiles-29522940-k8hqs\" (UID: \"c51f4019-3d36-45e9-a342-72e8b4ef9745\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-k8hqs" Feb 18 01:00:00 crc kubenswrapper[4847]: I0218 01:00:00.326865 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c51f4019-3d36-45e9-a342-72e8b4ef9745-secret-volume\") pod \"collect-profiles-29522940-k8hqs\" (UID: \"c51f4019-3d36-45e9-a342-72e8b4ef9745\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-k8hqs" Feb 18 01:00:00 crc kubenswrapper[4847]: E0218 01:00:00.407013 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:00:00 crc kubenswrapper[4847]: I0218 01:00:00.428172 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gmzz\" (UniqueName: \"kubernetes.io/projected/c51f4019-3d36-45e9-a342-72e8b4ef9745-kube-api-access-2gmzz\") pod \"collect-profiles-29522940-k8hqs\" (UID: \"c51f4019-3d36-45e9-a342-72e8b4ef9745\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-k8hqs" Feb 18 01:00:00 crc kubenswrapper[4847]: I0218 01:00:00.428252 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/c51f4019-3d36-45e9-a342-72e8b4ef9745-config-volume\") pod \"collect-profiles-29522940-k8hqs\" (UID: \"c51f4019-3d36-45e9-a342-72e8b4ef9745\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-k8hqs" Feb 18 01:00:00 crc kubenswrapper[4847]: I0218 01:00:00.428276 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c51f4019-3d36-45e9-a342-72e8b4ef9745-secret-volume\") pod \"collect-profiles-29522940-k8hqs\" (UID: \"c51f4019-3d36-45e9-a342-72e8b4ef9745\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-k8hqs" Feb 18 01:00:00 crc kubenswrapper[4847]: I0218 01:00:00.429952 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c51f4019-3d36-45e9-a342-72e8b4ef9745-config-volume\") pod \"collect-profiles-29522940-k8hqs\" (UID: \"c51f4019-3d36-45e9-a342-72e8b4ef9745\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-k8hqs" Feb 18 01:00:00 crc kubenswrapper[4847]: I0218 01:00:00.441261 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c51f4019-3d36-45e9-a342-72e8b4ef9745-secret-volume\") pod \"collect-profiles-29522940-k8hqs\" (UID: \"c51f4019-3d36-45e9-a342-72e8b4ef9745\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-k8hqs" Feb 18 01:00:00 crc kubenswrapper[4847]: I0218 01:00:00.445645 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gmzz\" (UniqueName: \"kubernetes.io/projected/c51f4019-3d36-45e9-a342-72e8b4ef9745-kube-api-access-2gmzz\") pod \"collect-profiles-29522940-k8hqs\" (UID: \"c51f4019-3d36-45e9-a342-72e8b4ef9745\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-k8hqs" Feb 18 01:00:00 crc kubenswrapper[4847]: I0218 01:00:00.507530 4847 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-k8hqs" Feb 18 01:00:01 crc kubenswrapper[4847]: I0218 01:00:01.004196 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522940-k8hqs"] Feb 18 01:00:01 crc kubenswrapper[4847]: I0218 01:00:01.978979 4847 generic.go:334] "Generic (PLEG): container finished" podID="c51f4019-3d36-45e9-a342-72e8b4ef9745" containerID="014b5ea421bfe6087923e1eb2f1b5498b3f427fab6627869a622130da144680f" exitCode=0 Feb 18 01:00:01 crc kubenswrapper[4847]: I0218 01:00:01.979156 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-k8hqs" event={"ID":"c51f4019-3d36-45e9-a342-72e8b4ef9745","Type":"ContainerDied","Data":"014b5ea421bfe6087923e1eb2f1b5498b3f427fab6627869a622130da144680f"} Feb 18 01:00:01 crc kubenswrapper[4847]: I0218 01:00:01.979386 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-k8hqs" event={"ID":"c51f4019-3d36-45e9-a342-72e8b4ef9745","Type":"ContainerStarted","Data":"9d644b5a4d3f632dc8a3d2a44fbc41f1b90129c07d77cfd0666a5ab83897ca7f"} Feb 18 01:00:03 crc kubenswrapper[4847]: E0218 01:00:03.422298 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:00:03 crc kubenswrapper[4847]: I0218 01:00:03.469209 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-k8hqs" Feb 18 01:00:03 crc kubenswrapper[4847]: I0218 01:00:03.620744 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c51f4019-3d36-45e9-a342-72e8b4ef9745-secret-volume\") pod \"c51f4019-3d36-45e9-a342-72e8b4ef9745\" (UID: \"c51f4019-3d36-45e9-a342-72e8b4ef9745\") " Feb 18 01:00:03 crc kubenswrapper[4847]: I0218 01:00:03.621356 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c51f4019-3d36-45e9-a342-72e8b4ef9745-config-volume\") pod \"c51f4019-3d36-45e9-a342-72e8b4ef9745\" (UID: \"c51f4019-3d36-45e9-a342-72e8b4ef9745\") " Feb 18 01:00:03 crc kubenswrapper[4847]: I0218 01:00:03.621393 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gmzz\" (UniqueName: \"kubernetes.io/projected/c51f4019-3d36-45e9-a342-72e8b4ef9745-kube-api-access-2gmzz\") pod \"c51f4019-3d36-45e9-a342-72e8b4ef9745\" (UID: \"c51f4019-3d36-45e9-a342-72e8b4ef9745\") " Feb 18 01:00:03 crc kubenswrapper[4847]: I0218 01:00:03.622443 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c51f4019-3d36-45e9-a342-72e8b4ef9745-config-volume" (OuterVolumeSpecName: "config-volume") pod "c51f4019-3d36-45e9-a342-72e8b4ef9745" (UID: "c51f4019-3d36-45e9-a342-72e8b4ef9745"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 01:00:03 crc kubenswrapper[4847]: I0218 01:00:03.626721 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c51f4019-3d36-45e9-a342-72e8b4ef9745-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c51f4019-3d36-45e9-a342-72e8b4ef9745" (UID: "c51f4019-3d36-45e9-a342-72e8b4ef9745"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:00:03 crc kubenswrapper[4847]: I0218 01:00:03.627788 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c51f4019-3d36-45e9-a342-72e8b4ef9745-kube-api-access-2gmzz" (OuterVolumeSpecName: "kube-api-access-2gmzz") pod "c51f4019-3d36-45e9-a342-72e8b4ef9745" (UID: "c51f4019-3d36-45e9-a342-72e8b4ef9745"). InnerVolumeSpecName "kube-api-access-2gmzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:00:03 crc kubenswrapper[4847]: I0218 01:00:03.724159 4847 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c51f4019-3d36-45e9-a342-72e8b4ef9745-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 01:00:03 crc kubenswrapper[4847]: I0218 01:00:03.724207 4847 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c51f4019-3d36-45e9-a342-72e8b4ef9745-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 01:00:03 crc kubenswrapper[4847]: I0218 01:00:03.724221 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gmzz\" (UniqueName: \"kubernetes.io/projected/c51f4019-3d36-45e9-a342-72e8b4ef9745-kube-api-access-2gmzz\") on node \"crc\" DevicePath \"\"" Feb 18 01:00:04 crc kubenswrapper[4847]: I0218 01:00:04.003364 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-k8hqs" event={"ID":"c51f4019-3d36-45e9-a342-72e8b4ef9745","Type":"ContainerDied","Data":"9d644b5a4d3f632dc8a3d2a44fbc41f1b90129c07d77cfd0666a5ab83897ca7f"} Feb 18 01:00:04 crc kubenswrapper[4847]: I0218 01:00:04.003420 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d644b5a4d3f632dc8a3d2a44fbc41f1b90129c07d77cfd0666a5ab83897ca7f" Feb 18 01:00:04 crc kubenswrapper[4847]: I0218 01:00:04.003460 4847 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522940-k8hqs" Feb 18 01:00:04 crc kubenswrapper[4847]: I0218 01:00:04.584704 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522895-kpcdn"] Feb 18 01:00:04 crc kubenswrapper[4847]: I0218 01:00:04.597525 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522895-kpcdn"] Feb 18 01:00:05 crc kubenswrapper[4847]: I0218 01:00:05.433099 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b73a0e0b-a65a-4985-b23e-40e2334a47e3" path="/var/lib/kubelet/pods/b73a0e0b-a65a-4985-b23e-40e2334a47e3/volumes" Feb 18 01:00:10 crc kubenswrapper[4847]: I0218 01:00:10.039085 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-69tbz"] Feb 18 01:00:10 crc kubenswrapper[4847]: I0218 01:00:10.052121 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-69tbz"] Feb 18 01:00:11 crc kubenswrapper[4847]: I0218 01:00:11.424587 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceb3804b-7097-4c08-9db9-8b08a71eb896" path="/var/lib/kubelet/pods/ceb3804b-7097-4c08-9db9-8b08a71eb896/volumes" Feb 18 01:00:15 crc kubenswrapper[4847]: E0218 01:00:15.408230 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:00:17 crc kubenswrapper[4847]: E0218 01:00:17.415971 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:00:28 crc kubenswrapper[4847]: E0218 01:00:28.407877 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:00:28 crc kubenswrapper[4847]: E0218 01:00:28.407869 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:00:30 crc kubenswrapper[4847]: I0218 01:00:30.050692 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-4c4c-account-create-update-mcbxg"] Feb 18 01:00:30 crc kubenswrapper[4847]: I0218 01:00:30.059178 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-sbsm7"] Feb 18 01:00:30 crc kubenswrapper[4847]: I0218 01:00:30.067493 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-4c4c-account-create-update-mcbxg"] Feb 18 01:00:30 crc kubenswrapper[4847]: I0218 01:00:30.077515 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-sbsm7"] Feb 18 01:00:31 crc kubenswrapper[4847]: I0218 01:00:31.043391 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6bzdl"] Feb 18 01:00:31 crc kubenswrapper[4847]: I0218 01:00:31.055188 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-vpbzx"] Feb 18 01:00:31 
crc kubenswrapper[4847]: I0218 01:00:31.065634 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-vpbzx"] Feb 18 01:00:31 crc kubenswrapper[4847]: I0218 01:00:31.074182 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-6bzdl"] Feb 18 01:00:31 crc kubenswrapper[4847]: I0218 01:00:31.417711 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d9912ea-2aab-435b-b8fc-d418d07085ce" path="/var/lib/kubelet/pods/3d9912ea-2aab-435b-b8fc-d418d07085ce/volumes" Feb 18 01:00:31 crc kubenswrapper[4847]: I0218 01:00:31.418972 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="701b730c-8421-410f-a849-24f8a092e781" path="/var/lib/kubelet/pods/701b730c-8421-410f-a849-24f8a092e781/volumes" Feb 18 01:00:31 crc kubenswrapper[4847]: I0218 01:00:31.420187 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699" path="/var/lib/kubelet/pods/bc8cdaa5-f9b7-4c90-ab87-8c5b0262f699/volumes" Feb 18 01:00:31 crc kubenswrapper[4847]: I0218 01:00:31.421389 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c41b6174-4c4e-48d6-b094-0af6d3781553" path="/var/lib/kubelet/pods/c41b6174-4c4e-48d6-b094-0af6d3781553/volumes" Feb 18 01:00:33 crc kubenswrapper[4847]: I0218 01:00:33.366338 4847 scope.go:117] "RemoveContainer" containerID="19d89bfa0587c78c3bf032b2abb5fbaf0deba53e413e48ed44e4cdcf0e342593" Feb 18 01:00:33 crc kubenswrapper[4847]: I0218 01:00:33.395933 4847 scope.go:117] "RemoveContainer" containerID="e9e0064a6dccfbf9dea70b23131ca52130b41e267edeac7fc5a6399ef999c370" Feb 18 01:00:33 crc kubenswrapper[4847]: I0218 01:00:33.471022 4847 scope.go:117] "RemoveContainer" containerID="68471a24a5b96a1956e52782b843f252bf133dfc56419b56793c73415b50f783" Feb 18 01:00:33 crc kubenswrapper[4847]: I0218 01:00:33.522216 4847 scope.go:117] "RemoveContainer" 
containerID="966c5437769cfb517be9b685d1cac9e5d886c0e82aed26df78e085789f6f123c" Feb 18 01:00:33 crc kubenswrapper[4847]: I0218 01:00:33.573395 4847 scope.go:117] "RemoveContainer" containerID="fa1c63c3075a1f894f94a3315b6b537ea1c25aabee71edc000ce5baa1aa47a48" Feb 18 01:00:33 crc kubenswrapper[4847]: I0218 01:00:33.613380 4847 scope.go:117] "RemoveContainer" containerID="2bf67bf8504506f4301446e987e9da6642153f44c026a1c70b6e5f265abb2a36" Feb 18 01:00:33 crc kubenswrapper[4847]: I0218 01:00:33.646647 4847 scope.go:117] "RemoveContainer" containerID="f014f3026c432a472e7ca049c99f28171b61b3457e422bd94f2e45b059f3f8da" Feb 18 01:00:33 crc kubenswrapper[4847]: I0218 01:00:33.685836 4847 scope.go:117] "RemoveContainer" containerID="1851255816afd251bc7e544dbc1c8ca3be8d3d1e314706a172dccd60c97909d9" Feb 18 01:00:33 crc kubenswrapper[4847]: I0218 01:00:33.722692 4847 scope.go:117] "RemoveContainer" containerID="a7b09064f64997187a8f355a0de5b41e789a08a3729f585b59f264e17ef2f8aa" Feb 18 01:00:33 crc kubenswrapper[4847]: I0218 01:00:33.751482 4847 scope.go:117] "RemoveContainer" containerID="ea870ac0bd7b79a0d8414450976e8563df25edd64f61ba52836c0d013b2c6864" Feb 18 01:00:33 crc kubenswrapper[4847]: I0218 01:00:33.788131 4847 scope.go:117] "RemoveContainer" containerID="867c9a3b4ad951a551d08d4df8b1f470196105ae232ff95cfd38e6bd1305ccf0" Feb 18 01:00:33 crc kubenswrapper[4847]: I0218 01:00:33.824444 4847 scope.go:117] "RemoveContainer" containerID="9eb3be4d4b9ccaeaa0d26743ed2210d2abd92e5b8797c776b4e80823b17da279" Feb 18 01:00:39 crc kubenswrapper[4847]: I0218 01:00:39.463552 4847 generic.go:334] "Generic (PLEG): container finished" podID="2796ae55-da7e-484c-a3fa-789aabef230d" containerID="8af4b02594483eec05bed43a2f0bca49c2d4ded39efe941f312774c982a37325" exitCode=0 Feb 18 01:00:39 crc kubenswrapper[4847]: I0218 01:00:39.463948 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" 
event={"ID":"2796ae55-da7e-484c-a3fa-789aabef230d","Type":"ContainerDied","Data":"8af4b02594483eec05bed43a2f0bca49c2d4ded39efe941f312774c982a37325"} Feb 18 01:00:40 crc kubenswrapper[4847]: E0218 01:00:40.408720 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:00:40 crc kubenswrapper[4847]: E0218 01:00:40.409437 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.035561 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.107028 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-ssh-key-openstack-edpm-ipam\") pod \"2796ae55-da7e-484c-a3fa-789aabef230d\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.107146 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-libvirt-combined-ca-bundle\") pod \"2796ae55-da7e-484c-a3fa-789aabef230d\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.107173 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-telemetry-combined-ca-bundle\") pod \"2796ae55-da7e-484c-a3fa-789aabef230d\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.107224 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-bootstrap-combined-ca-bundle\") pod \"2796ae55-da7e-484c-a3fa-789aabef230d\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.107345 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod 
\"2796ae55-da7e-484c-a3fa-789aabef230d\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.107394 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"2796ae55-da7e-484c-a3fa-789aabef230d\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.107446 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-inventory\") pod \"2796ae55-da7e-484c-a3fa-789aabef230d\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.107510 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"2796ae55-da7e-484c-a3fa-789aabef230d\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.107542 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47w99\" (UniqueName: \"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-kube-api-access-47w99\") pod \"2796ae55-da7e-484c-a3fa-789aabef230d\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.107570 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-telemetry-power-monitoring-combined-ca-bundle\") pod \"2796ae55-da7e-484c-a3fa-789aabef230d\" (UID: 
\"2796ae55-da7e-484c-a3fa-789aabef230d\") " Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.107594 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"2796ae55-da7e-484c-a3fa-789aabef230d\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.107654 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-ovn-combined-ca-bundle\") pod \"2796ae55-da7e-484c-a3fa-789aabef230d\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.107705 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-repo-setup-combined-ca-bundle\") pod \"2796ae55-da7e-484c-a3fa-789aabef230d\" (UID: \"2796ae55-da7e-484c-a3fa-789aabef230d\") " Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.115066 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "2796ae55-da7e-484c-a3fa-789aabef230d" (UID: "2796ae55-da7e-484c-a3fa-789aabef230d"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.117040 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "2796ae55-da7e-484c-a3fa-789aabef230d" (UID: "2796ae55-da7e-484c-a3fa-789aabef230d"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.117080 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2796ae55-da7e-484c-a3fa-789aabef230d" (UID: "2796ae55-da7e-484c-a3fa-789aabef230d"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.118586 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "2796ae55-da7e-484c-a3fa-789aabef230d" (UID: "2796ae55-da7e-484c-a3fa-789aabef230d"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.119442 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "2796ae55-da7e-484c-a3fa-789aabef230d" (UID: "2796ae55-da7e-484c-a3fa-789aabef230d"). 
InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.122442 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "2796ae55-da7e-484c-a3fa-789aabef230d" (UID: "2796ae55-da7e-484c-a3fa-789aabef230d"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.125127 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-kube-api-access-47w99" (OuterVolumeSpecName: "kube-api-access-47w99") pod "2796ae55-da7e-484c-a3fa-789aabef230d" (UID: "2796ae55-da7e-484c-a3fa-789aabef230d"). InnerVolumeSpecName "kube-api-access-47w99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.125374 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2796ae55-da7e-484c-a3fa-789aabef230d" (UID: "2796ae55-da7e-484c-a3fa-789aabef230d"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.125367 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "2796ae55-da7e-484c-a3fa-789aabef230d" (UID: "2796ae55-da7e-484c-a3fa-789aabef230d"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.126831 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2796ae55-da7e-484c-a3fa-789aabef230d" (UID: "2796ae55-da7e-484c-a3fa-789aabef230d"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.143873 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "2796ae55-da7e-484c-a3fa-789aabef230d" (UID: "2796ae55-da7e-484c-a3fa-789aabef230d"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.165118 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2796ae55-da7e-484c-a3fa-789aabef230d" (UID: "2796ae55-da7e-484c-a3fa-789aabef230d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.191585 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-inventory" (OuterVolumeSpecName: "inventory") pod "2796ae55-da7e-484c-a3fa-789aabef230d" (UID: "2796ae55-da7e-484c-a3fa-789aabef230d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.214556 4847 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.214855 4847 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.214920 4847 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.214979 4847 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.215043 4847 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.215097 4847 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.215154 4847 reconciler_common.go:293] "Volume detached for volume 
\"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.215220 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47w99\" (UniqueName: \"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-kube-api-access-47w99\") on node \"crc\" DevicePath \"\"" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.215276 4847 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.215335 4847 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/2796ae55-da7e-484c-a3fa-789aabef230d-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.215390 4847 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.215451 4847 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.215508 4847 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2796ae55-da7e-484c-a3fa-789aabef230d-ssh-key-openstack-edpm-ipam\") on node \"crc\" 
DevicePath \"\"" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.488487 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" event={"ID":"2796ae55-da7e-484c-a3fa-789aabef230d","Type":"ContainerDied","Data":"bac5ca903a047e83191f11fa58c7669973e5fab226e90820d40b999be096d340"} Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.488798 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bac5ca903a047e83191f11fa58c7669973e5fab226e90820d40b999be096d340" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.488562 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-df98v" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.617211 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z"] Feb 18 01:00:41 crc kubenswrapper[4847]: E0218 01:00:41.617760 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c51f4019-3d36-45e9-a342-72e8b4ef9745" containerName="collect-profiles" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.617784 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="c51f4019-3d36-45e9-a342-72e8b4ef9745" containerName="collect-profiles" Feb 18 01:00:41 crc kubenswrapper[4847]: E0218 01:00:41.617836 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2796ae55-da7e-484c-a3fa-789aabef230d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.617847 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="2796ae55-da7e-484c-a3fa-789aabef230d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.618085 4847 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2796ae55-da7e-484c-a3fa-789aabef230d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.618118 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="c51f4019-3d36-45e9-a342-72e8b4ef9745" containerName="collect-profiles" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.629112 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.632360 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z"] Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.642695 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.642844 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.643033 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.643150 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xzl9l" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.643295 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.735805 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0429dd21-328a-4aed-9e67-f008635b6127-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d9z9z\" (UID: \"0429dd21-328a-4aed-9e67-f008635b6127\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.736115 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0429dd21-328a-4aed-9e67-f008635b6127-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d9z9z\" (UID: \"0429dd21-328a-4aed-9e67-f008635b6127\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.736209 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0429dd21-328a-4aed-9e67-f008635b6127-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d9z9z\" (UID: \"0429dd21-328a-4aed-9e67-f008635b6127\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.736418 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8rsx\" (UniqueName: \"kubernetes.io/projected/0429dd21-328a-4aed-9e67-f008635b6127-kube-api-access-p8rsx\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d9z9z\" (UID: \"0429dd21-328a-4aed-9e67-f008635b6127\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.736497 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0429dd21-328a-4aed-9e67-f008635b6127-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d9z9z\" (UID: \"0429dd21-328a-4aed-9e67-f008635b6127\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.838677 4847 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0429dd21-328a-4aed-9e67-f008635b6127-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d9z9z\" (UID: \"0429dd21-328a-4aed-9e67-f008635b6127\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.839109 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0429dd21-328a-4aed-9e67-f008635b6127-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d9z9z\" (UID: \"0429dd21-328a-4aed-9e67-f008635b6127\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.839135 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0429dd21-328a-4aed-9e67-f008635b6127-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d9z9z\" (UID: \"0429dd21-328a-4aed-9e67-f008635b6127\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.839188 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8rsx\" (UniqueName: \"kubernetes.io/projected/0429dd21-328a-4aed-9e67-f008635b6127-kube-api-access-p8rsx\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d9z9z\" (UID: \"0429dd21-328a-4aed-9e67-f008635b6127\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.839216 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0429dd21-328a-4aed-9e67-f008635b6127-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d9z9z\" (UID: \"0429dd21-328a-4aed-9e67-f008635b6127\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.840283 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0429dd21-328a-4aed-9e67-f008635b6127-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d9z9z\" (UID: \"0429dd21-328a-4aed-9e67-f008635b6127\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.846663 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0429dd21-328a-4aed-9e67-f008635b6127-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d9z9z\" (UID: \"0429dd21-328a-4aed-9e67-f008635b6127\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.847443 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0429dd21-328a-4aed-9e67-f008635b6127-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d9z9z\" (UID: \"0429dd21-328a-4aed-9e67-f008635b6127\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.847976 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0429dd21-328a-4aed-9e67-f008635b6127-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d9z9z\" (UID: \"0429dd21-328a-4aed-9e67-f008635b6127\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.864741 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8rsx\" (UniqueName: 
\"kubernetes.io/projected/0429dd21-328a-4aed-9e67-f008635b6127-kube-api-access-p8rsx\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d9z9z\" (UID: \"0429dd21-328a-4aed-9e67-f008635b6127\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" Feb 18 01:00:41 crc kubenswrapper[4847]: I0218 01:00:41.962584 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" Feb 18 01:00:42 crc kubenswrapper[4847]: I0218 01:00:42.044925 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-pzwv2"] Feb 18 01:00:42 crc kubenswrapper[4847]: I0218 01:00:42.057772 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-pzwv2"] Feb 18 01:00:42 crc kubenswrapper[4847]: I0218 01:00:42.553829 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z"] Feb 18 01:00:43 crc kubenswrapper[4847]: I0218 01:00:43.423193 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa59fc3a-ea9b-45bb-a190-1844834093e9" path="/var/lib/kubelet/pods/fa59fc3a-ea9b-45bb-a190-1844834093e9/volumes" Feb 18 01:00:43 crc kubenswrapper[4847]: I0218 01:00:43.507009 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" event={"ID":"0429dd21-328a-4aed-9e67-f008635b6127","Type":"ContainerStarted","Data":"2f0d10efd26721ecc15ad604e415a88774b1f7d697c53fe75e15b8b1c240a72d"} Feb 18 01:00:43 crc kubenswrapper[4847]: I0218 01:00:43.507087 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" event={"ID":"0429dd21-328a-4aed-9e67-f008635b6127","Type":"ContainerStarted","Data":"ef4cc5c61cd6065e63bcdb4f259627f2fc1e745557ea711e5ae2dfc965be3d9d"} Feb 18 01:00:43 crc kubenswrapper[4847]: I0218 01:00:43.530561 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" podStartSLOduration=2.019090402 podStartE2EDuration="2.530538361s" podCreationTimestamp="2026-02-18 01:00:41 +0000 UTC" firstStartedPulling="2026-02-18 01:00:42.559056286 +0000 UTC m=+2115.936407238" lastFinishedPulling="2026-02-18 01:00:43.070504235 +0000 UTC m=+2116.447855197" observedRunningTime="2026-02-18 01:00:43.521219148 +0000 UTC m=+2116.898570090" watchObservedRunningTime="2026-02-18 01:00:43.530538361 +0000 UTC m=+2116.907889303" Feb 18 01:00:51 crc kubenswrapper[4847]: E0218 01:00:51.408755 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:00:51 crc kubenswrapper[4847]: E0218 01:00:51.408814 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:00:53 crc kubenswrapper[4847]: I0218 01:00:53.491982 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:00:53 crc kubenswrapper[4847]: I0218 01:00:53.492329 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:01:00 crc kubenswrapper[4847]: I0218 01:01:00.150413 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29522941-rppqb"] Feb 18 01:01:00 crc kubenswrapper[4847]: I0218 01:01:00.153127 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522941-rppqb" Feb 18 01:01:00 crc kubenswrapper[4847]: I0218 01:01:00.164847 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522941-rppqb"] Feb 18 01:01:00 crc kubenswrapper[4847]: I0218 01:01:00.208703 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b8d659-c976-4095-baab-c6452d321fe2-combined-ca-bundle\") pod \"keystone-cron-29522941-rppqb\" (UID: \"55b8d659-c976-4095-baab-c6452d321fe2\") " pod="openstack/keystone-cron-29522941-rppqb" Feb 18 01:01:00 crc kubenswrapper[4847]: I0218 01:01:00.208833 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gxfp\" (UniqueName: \"kubernetes.io/projected/55b8d659-c976-4095-baab-c6452d321fe2-kube-api-access-7gxfp\") pod \"keystone-cron-29522941-rppqb\" (UID: \"55b8d659-c976-4095-baab-c6452d321fe2\") " pod="openstack/keystone-cron-29522941-rppqb" Feb 18 01:01:00 crc kubenswrapper[4847]: I0218 01:01:00.208875 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/55b8d659-c976-4095-baab-c6452d321fe2-fernet-keys\") pod \"keystone-cron-29522941-rppqb\" (UID: \"55b8d659-c976-4095-baab-c6452d321fe2\") " pod="openstack/keystone-cron-29522941-rppqb" Feb 18 01:01:00 crc kubenswrapper[4847]: I0218 01:01:00.209155 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/55b8d659-c976-4095-baab-c6452d321fe2-config-data\") pod \"keystone-cron-29522941-rppqb\" (UID: \"55b8d659-c976-4095-baab-c6452d321fe2\") " pod="openstack/keystone-cron-29522941-rppqb" Feb 18 01:01:00 crc kubenswrapper[4847]: I0218 01:01:00.313519 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b8d659-c976-4095-baab-c6452d321fe2-config-data\") pod \"keystone-cron-29522941-rppqb\" (UID: \"55b8d659-c976-4095-baab-c6452d321fe2\") " pod="openstack/keystone-cron-29522941-rppqb" Feb 18 01:01:00 crc kubenswrapper[4847]: I0218 01:01:00.313601 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b8d659-c976-4095-baab-c6452d321fe2-combined-ca-bundle\") pod \"keystone-cron-29522941-rppqb\" (UID: \"55b8d659-c976-4095-baab-c6452d321fe2\") " pod="openstack/keystone-cron-29522941-rppqb" Feb 18 01:01:00 crc kubenswrapper[4847]: I0218 01:01:00.313766 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gxfp\" (UniqueName: \"kubernetes.io/projected/55b8d659-c976-4095-baab-c6452d321fe2-kube-api-access-7gxfp\") pod \"keystone-cron-29522941-rppqb\" (UID: \"55b8d659-c976-4095-baab-c6452d321fe2\") " pod="openstack/keystone-cron-29522941-rppqb" Feb 18 01:01:00 crc kubenswrapper[4847]: I0218 01:01:00.313805 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/55b8d659-c976-4095-baab-c6452d321fe2-fernet-keys\") pod \"keystone-cron-29522941-rppqb\" (UID: \"55b8d659-c976-4095-baab-c6452d321fe2\") " pod="openstack/keystone-cron-29522941-rppqb" Feb 18 01:01:00 crc kubenswrapper[4847]: I0218 01:01:00.320567 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/55b8d659-c976-4095-baab-c6452d321fe2-config-data\") pod \"keystone-cron-29522941-rppqb\" (UID: \"55b8d659-c976-4095-baab-c6452d321fe2\") " pod="openstack/keystone-cron-29522941-rppqb" Feb 18 01:01:00 crc kubenswrapper[4847]: I0218 01:01:00.322717 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b8d659-c976-4095-baab-c6452d321fe2-combined-ca-bundle\") pod \"keystone-cron-29522941-rppqb\" (UID: \"55b8d659-c976-4095-baab-c6452d321fe2\") " pod="openstack/keystone-cron-29522941-rppqb" Feb 18 01:01:00 crc kubenswrapper[4847]: I0218 01:01:00.334143 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/55b8d659-c976-4095-baab-c6452d321fe2-fernet-keys\") pod \"keystone-cron-29522941-rppqb\" (UID: \"55b8d659-c976-4095-baab-c6452d321fe2\") " pod="openstack/keystone-cron-29522941-rppqb" Feb 18 01:01:00 crc kubenswrapper[4847]: I0218 01:01:00.344381 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gxfp\" (UniqueName: \"kubernetes.io/projected/55b8d659-c976-4095-baab-c6452d321fe2-kube-api-access-7gxfp\") pod \"keystone-cron-29522941-rppqb\" (UID: \"55b8d659-c976-4095-baab-c6452d321fe2\") " pod="openstack/keystone-cron-29522941-rppqb" Feb 18 01:01:00 crc kubenswrapper[4847]: I0218 01:01:00.488032 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522941-rppqb" Feb 18 01:01:01 crc kubenswrapper[4847]: I0218 01:01:01.027899 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522941-rppqb"] Feb 18 01:01:01 crc kubenswrapper[4847]: I0218 01:01:01.735130 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522941-rppqb" event={"ID":"55b8d659-c976-4095-baab-c6452d321fe2","Type":"ContainerStarted","Data":"828dc2a48898582fe6b91d081ed1e5a391773d643e0e820f71b1e9498fd18789"} Feb 18 01:01:01 crc kubenswrapper[4847]: I0218 01:01:01.735396 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522941-rppqb" event={"ID":"55b8d659-c976-4095-baab-c6452d321fe2","Type":"ContainerStarted","Data":"3548f8846a20dc22e0f65fc246bbecab7f89ccfff7b1cfbaadab29548c4f47e8"} Feb 18 01:01:01 crc kubenswrapper[4847]: I0218 01:01:01.755562 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29522941-rppqb" podStartSLOduration=1.755542185 podStartE2EDuration="1.755542185s" podCreationTimestamp="2026-02-18 01:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 01:01:01.749110341 +0000 UTC m=+2135.126461283" watchObservedRunningTime="2026-02-18 01:01:01.755542185 +0000 UTC m=+2135.132893127" Feb 18 01:01:03 crc kubenswrapper[4847]: E0218 01:01:03.406981 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:01:03 crc kubenswrapper[4847]: I0218 01:01:03.770704 4847 generic.go:334] "Generic (PLEG): container finished" podID="55b8d659-c976-4095-baab-c6452d321fe2" 
containerID="828dc2a48898582fe6b91d081ed1e5a391773d643e0e820f71b1e9498fd18789" exitCode=0 Feb 18 01:01:03 crc kubenswrapper[4847]: I0218 01:01:03.770789 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522941-rppqb" event={"ID":"55b8d659-c976-4095-baab-c6452d321fe2","Type":"ContainerDied","Data":"828dc2a48898582fe6b91d081ed1e5a391773d643e0e820f71b1e9498fd18789"} Feb 18 01:01:04 crc kubenswrapper[4847]: E0218 01:01:04.408954 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:01:05 crc kubenswrapper[4847]: I0218 01:01:05.184644 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522941-rppqb" Feb 18 01:01:05 crc kubenswrapper[4847]: I0218 01:01:05.236753 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/55b8d659-c976-4095-baab-c6452d321fe2-fernet-keys\") pod \"55b8d659-c976-4095-baab-c6452d321fe2\" (UID: \"55b8d659-c976-4095-baab-c6452d321fe2\") " Feb 18 01:01:05 crc kubenswrapper[4847]: I0218 01:01:05.238013 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b8d659-c976-4095-baab-c6452d321fe2-config-data\") pod \"55b8d659-c976-4095-baab-c6452d321fe2\" (UID: \"55b8d659-c976-4095-baab-c6452d321fe2\") " Feb 18 01:01:05 crc kubenswrapper[4847]: I0218 01:01:05.238477 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b8d659-c976-4095-baab-c6452d321fe2-combined-ca-bundle\") pod \"55b8d659-c976-4095-baab-c6452d321fe2\" 
(UID: \"55b8d659-c976-4095-baab-c6452d321fe2\") " Feb 18 01:01:05 crc kubenswrapper[4847]: I0218 01:01:05.239150 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gxfp\" (UniqueName: \"kubernetes.io/projected/55b8d659-c976-4095-baab-c6452d321fe2-kube-api-access-7gxfp\") pod \"55b8d659-c976-4095-baab-c6452d321fe2\" (UID: \"55b8d659-c976-4095-baab-c6452d321fe2\") " Feb 18 01:01:05 crc kubenswrapper[4847]: I0218 01:01:05.282352 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55b8d659-c976-4095-baab-c6452d321fe2-kube-api-access-7gxfp" (OuterVolumeSpecName: "kube-api-access-7gxfp") pod "55b8d659-c976-4095-baab-c6452d321fe2" (UID: "55b8d659-c976-4095-baab-c6452d321fe2"). InnerVolumeSpecName "kube-api-access-7gxfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:01:05 crc kubenswrapper[4847]: I0218 01:01:05.283906 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55b8d659-c976-4095-baab-c6452d321fe2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "55b8d659-c976-4095-baab-c6452d321fe2" (UID: "55b8d659-c976-4095-baab-c6452d321fe2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:01:05 crc kubenswrapper[4847]: I0218 01:01:05.319990 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55b8d659-c976-4095-baab-c6452d321fe2-config-data" (OuterVolumeSpecName: "config-data") pod "55b8d659-c976-4095-baab-c6452d321fe2" (UID: "55b8d659-c976-4095-baab-c6452d321fe2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:01:05 crc kubenswrapper[4847]: I0218 01:01:05.324554 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55b8d659-c976-4095-baab-c6452d321fe2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55b8d659-c976-4095-baab-c6452d321fe2" (UID: "55b8d659-c976-4095-baab-c6452d321fe2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:01:05 crc kubenswrapper[4847]: I0218 01:01:05.368342 4847 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/55b8d659-c976-4095-baab-c6452d321fe2-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 01:01:05 crc kubenswrapper[4847]: I0218 01:01:05.368410 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b8d659-c976-4095-baab-c6452d321fe2-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 01:01:05 crc kubenswrapper[4847]: I0218 01:01:05.368438 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b8d659-c976-4095-baab-c6452d321fe2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 01:01:05 crc kubenswrapper[4847]: I0218 01:01:05.368469 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gxfp\" (UniqueName: \"kubernetes.io/projected/55b8d659-c976-4095-baab-c6452d321fe2-kube-api-access-7gxfp\") on node \"crc\" DevicePath \"\"" Feb 18 01:01:05 crc kubenswrapper[4847]: I0218 01:01:05.791424 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522941-rppqb" event={"ID":"55b8d659-c976-4095-baab-c6452d321fe2","Type":"ContainerDied","Data":"3548f8846a20dc22e0f65fc246bbecab7f89ccfff7b1cfbaadab29548c4f47e8"} Feb 18 01:01:05 crc kubenswrapper[4847]: I0218 01:01:05.791469 4847 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="3548f8846a20dc22e0f65fc246bbecab7f89ccfff7b1cfbaadab29548c4f47e8" Feb 18 01:01:05 crc kubenswrapper[4847]: I0218 01:01:05.791489 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522941-rppqb" Feb 18 01:01:15 crc kubenswrapper[4847]: I0218 01:01:15.069702 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-7jkqm"] Feb 18 01:01:15 crc kubenswrapper[4847]: I0218 01:01:15.083248 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-7jkqm"] Feb 18 01:01:15 crc kubenswrapper[4847]: I0218 01:01:15.419450 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab094cba-aca4-4ea7-a5a9-b13d4ac35263" path="/var/lib/kubelet/pods/ab094cba-aca4-4ea7-a5a9-b13d4ac35263/volumes" Feb 18 01:01:16 crc kubenswrapper[4847]: E0218 01:01:16.408544 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:01:16 crc kubenswrapper[4847]: E0218 01:01:16.408569 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:01:23 crc kubenswrapper[4847]: I0218 01:01:23.492395 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 18 01:01:23 crc kubenswrapper[4847]: I0218 01:01:23.493103 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:01:27 crc kubenswrapper[4847]: E0218 01:01:27.414573 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:01:28 crc kubenswrapper[4847]: E0218 01:01:28.408327 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:01:34 crc kubenswrapper[4847]: I0218 01:01:34.135030 4847 scope.go:117] "RemoveContainer" containerID="a327da8e984da18ce271e2a004d1ff5af75dab0a1caccb5ab62599cf9859d244" Feb 18 01:01:34 crc kubenswrapper[4847]: I0218 01:01:34.208863 4847 scope.go:117] "RemoveContainer" containerID="42d62458a85e69b80e6ca971c691a9e4ea5105d3707936cf4ef10043759fb314" Feb 18 01:01:40 crc kubenswrapper[4847]: E0218 01:01:40.407296 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 
01:01:41 crc kubenswrapper[4847]: E0218 01:01:41.406213 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:01:53 crc kubenswrapper[4847]: I0218 01:01:53.492328 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:01:53 crc kubenswrapper[4847]: I0218 01:01:53.493085 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:01:53 crc kubenswrapper[4847]: I0218 01:01:53.493153 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 01:01:53 crc kubenswrapper[4847]: I0218 01:01:53.494246 4847 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970"} pod="openshift-machine-config-operator/machine-config-daemon-xsj47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 01:01:53 crc kubenswrapper[4847]: I0218 01:01:53.494350 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" 
podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" containerID="cri-o://f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" gracePeriod=600 Feb 18 01:01:53 crc kubenswrapper[4847]: E0218 01:01:53.628224 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:01:54 crc kubenswrapper[4847]: I0218 01:01:54.375650 4847 generic.go:334] "Generic (PLEG): container finished" podID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" exitCode=0 Feb 18 01:01:54 crc kubenswrapper[4847]: I0218 01:01:54.375724 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerDied","Data":"f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970"} Feb 18 01:01:54 crc kubenswrapper[4847]: I0218 01:01:54.375789 4847 scope.go:117] "RemoveContainer" containerID="23f3a796a2412e9ab1c0e2914b0f2abb3867d28ef0847371c851e8c2e11a6769" Feb 18 01:01:54 crc kubenswrapper[4847]: I0218 01:01:54.376716 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:01:54 crc kubenswrapper[4847]: E0218 01:01:54.377243 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:01:54 crc kubenswrapper[4847]: E0218 01:01:54.418869 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:01:56 crc kubenswrapper[4847]: E0218 01:01:56.406925 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:01:56 crc kubenswrapper[4847]: E0218 01:01:56.731659 4847 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0429dd21_328a_4aed_9e67_f008635b6127.slice/crio-conmon-2f0d10efd26721ecc15ad604e415a88774b1f7d697c53fe75e15b8b1c240a72d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0429dd21_328a_4aed_9e67_f008635b6127.slice/crio-2f0d10efd26721ecc15ad604e415a88774b1f7d697c53fe75e15b8b1c240a72d.scope\": RecentStats: unable to find data in memory cache]" Feb 18 01:01:57 crc kubenswrapper[4847]: I0218 01:01:57.419795 4847 generic.go:334] "Generic (PLEG): container finished" podID="0429dd21-328a-4aed-9e67-f008635b6127" containerID="2f0d10efd26721ecc15ad604e415a88774b1f7d697c53fe75e15b8b1c240a72d" exitCode=0 Feb 18 01:01:57 crc kubenswrapper[4847]: I0218 
01:01:57.421931 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" event={"ID":"0429dd21-328a-4aed-9e67-f008635b6127","Type":"ContainerDied","Data":"2f0d10efd26721ecc15ad604e415a88774b1f7d697c53fe75e15b8b1c240a72d"} Feb 18 01:01:58 crc kubenswrapper[4847]: I0218 01:01:58.954787 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.026413 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0429dd21-328a-4aed-9e67-f008635b6127-inventory\") pod \"0429dd21-328a-4aed-9e67-f008635b6127\" (UID: \"0429dd21-328a-4aed-9e67-f008635b6127\") " Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.026953 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8rsx\" (UniqueName: \"kubernetes.io/projected/0429dd21-328a-4aed-9e67-f008635b6127-kube-api-access-p8rsx\") pod \"0429dd21-328a-4aed-9e67-f008635b6127\" (UID: \"0429dd21-328a-4aed-9e67-f008635b6127\") " Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.027000 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0429dd21-328a-4aed-9e67-f008635b6127-ovncontroller-config-0\") pod \"0429dd21-328a-4aed-9e67-f008635b6127\" (UID: \"0429dd21-328a-4aed-9e67-f008635b6127\") " Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.027228 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0429dd21-328a-4aed-9e67-f008635b6127-ssh-key-openstack-edpm-ipam\") pod \"0429dd21-328a-4aed-9e67-f008635b6127\" (UID: \"0429dd21-328a-4aed-9e67-f008635b6127\") " Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 
01:01:59.027289 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0429dd21-328a-4aed-9e67-f008635b6127-ovn-combined-ca-bundle\") pod \"0429dd21-328a-4aed-9e67-f008635b6127\" (UID: \"0429dd21-328a-4aed-9e67-f008635b6127\") " Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.035929 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0429dd21-328a-4aed-9e67-f008635b6127-kube-api-access-p8rsx" (OuterVolumeSpecName: "kube-api-access-p8rsx") pod "0429dd21-328a-4aed-9e67-f008635b6127" (UID: "0429dd21-328a-4aed-9e67-f008635b6127"). InnerVolumeSpecName "kube-api-access-p8rsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.038571 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0429dd21-328a-4aed-9e67-f008635b6127-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "0429dd21-328a-4aed-9e67-f008635b6127" (UID: "0429dd21-328a-4aed-9e67-f008635b6127"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.072178 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0429dd21-328a-4aed-9e67-f008635b6127-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "0429dd21-328a-4aed-9e67-f008635b6127" (UID: "0429dd21-328a-4aed-9e67-f008635b6127"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.074029 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0429dd21-328a-4aed-9e67-f008635b6127-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0429dd21-328a-4aed-9e67-f008635b6127" (UID: "0429dd21-328a-4aed-9e67-f008635b6127"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.084950 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0429dd21-328a-4aed-9e67-f008635b6127-inventory" (OuterVolumeSpecName: "inventory") pod "0429dd21-328a-4aed-9e67-f008635b6127" (UID: "0429dd21-328a-4aed-9e67-f008635b6127"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.130506 4847 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0429dd21-328a-4aed-9e67-f008635b6127-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.130538 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8rsx\" (UniqueName: \"kubernetes.io/projected/0429dd21-328a-4aed-9e67-f008635b6127-kube-api-access-p8rsx\") on node \"crc\" DevicePath \"\"" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.130547 4847 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0429dd21-328a-4aed-9e67-f008635b6127-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.130557 4847 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/0429dd21-328a-4aed-9e67-f008635b6127-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.130567 4847 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0429dd21-328a-4aed-9e67-f008635b6127-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.443623 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" event={"ID":"0429dd21-328a-4aed-9e67-f008635b6127","Type":"ContainerDied","Data":"ef4cc5c61cd6065e63bcdb4f259627f2fc1e745557ea711e5ae2dfc965be3d9d"} Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.443682 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef4cc5c61cd6065e63bcdb4f259627f2fc1e745557ea711e5ae2dfc965be3d9d" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.443680 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d9z9z" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.590683 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f"] Feb 18 01:01:59 crc kubenswrapper[4847]: E0218 01:01:59.591195 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b8d659-c976-4095-baab-c6452d321fe2" containerName="keystone-cron" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.591212 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b8d659-c976-4095-baab-c6452d321fe2" containerName="keystone-cron" Feb 18 01:01:59 crc kubenswrapper[4847]: E0218 01:01:59.591222 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0429dd21-328a-4aed-9e67-f008635b6127" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.591231 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="0429dd21-328a-4aed-9e67-f008635b6127" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.591411 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="55b8d659-c976-4095-baab-c6452d321fe2" containerName="keystone-cron" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.591441 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="0429dd21-328a-4aed-9e67-f008635b6127" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.592234 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.596261 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.596701 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.596727 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xzl9l" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.596807 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.597080 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.599481 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f"] Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.642715 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f\" (UID: \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.642907 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f\" (UID: 
\"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.643137 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f\" (UID: \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.643265 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f\" (UID: \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.643351 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn99s\" (UniqueName: \"kubernetes.io/projected/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-kube-api-access-cn99s\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f\" (UID: \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.745533 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f\" (UID: \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.745683 4847 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f\" (UID: \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.745780 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f\" (UID: \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.745840 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f\" (UID: \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.745876 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn99s\" (UniqueName: \"kubernetes.io/projected/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-kube-api-access-cn99s\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f\" (UID: \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.751244 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-ssh-key-openstack-edpm-ipam\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f\" (UID: \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.751555 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f\" (UID: \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.752017 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f\" (UID: \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.752170 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f\" (UID: \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.767115 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn99s\" (UniqueName: \"kubernetes.io/projected/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-kube-api-access-cn99s\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f\" (UID: \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" Feb 18 01:01:59 crc kubenswrapper[4847]: I0218 01:01:59.919412 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" Feb 18 01:02:00 crc kubenswrapper[4847]: I0218 01:02:00.572591 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f"] Feb 18 01:02:01 crc kubenswrapper[4847]: I0218 01:02:01.473747 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" event={"ID":"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d","Type":"ContainerStarted","Data":"2516ac175e2763c8fb9f262acc97c985321513cb87d1ca2b67a5e0b3c9fb01e5"} Feb 18 01:02:01 crc kubenswrapper[4847]: I0218 01:02:01.474153 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" event={"ID":"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d","Type":"ContainerStarted","Data":"ae4d21bd297ace19092531045664e59f96f29686344746e3c10a1fbabbb7ea1d"} Feb 18 01:02:01 crc kubenswrapper[4847]: I0218 01:02:01.502783 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" podStartSLOduration=2.019061778 podStartE2EDuration="2.502751991s" podCreationTimestamp="2026-02-18 01:01:59 +0000 UTC" firstStartedPulling="2026-02-18 01:02:00.563779335 +0000 UTC m=+2193.941130307" lastFinishedPulling="2026-02-18 01:02:01.047469538 +0000 UTC m=+2194.424820520" observedRunningTime="2026-02-18 01:02:01.493740095 +0000 UTC m=+2194.871091037" watchObservedRunningTime="2026-02-18 01:02:01.502751991 +0000 UTC m=+2194.880102973" Feb 18 01:02:05 crc kubenswrapper[4847]: E0218 01:02:05.406883 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 
01:02:06 crc kubenswrapper[4847]: I0218 01:02:06.404983 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:02:06 crc kubenswrapper[4847]: E0218 01:02:06.405574 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:02:09 crc kubenswrapper[4847]: E0218 01:02:09.406383 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:02:18 crc kubenswrapper[4847]: E0218 01:02:18.407939 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:02:21 crc kubenswrapper[4847]: I0218 01:02:21.405210 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:02:21 crc kubenswrapper[4847]: E0218 01:02:21.405955 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:02:23 crc kubenswrapper[4847]: E0218 01:02:23.408653 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:02:32 crc kubenswrapper[4847]: I0218 01:02:32.410081 4847 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 01:02:32 crc kubenswrapper[4847]: E0218 01:02:32.516745 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:02:32 crc kubenswrapper[4847]: E0218 01:02:32.516812 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:02:32 crc kubenswrapper[4847]: E0218 01:02:32.516984 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjwt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-k4t5r_openstack(452f74c1-fa5f-464b-9943-a4a1c2d5c48a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:02:32 crc kubenswrapper[4847]: E0218 01:02:32.519004 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:02:33 crc kubenswrapper[4847]: I0218 01:02:33.405042 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:02:33 crc kubenswrapper[4847]: E0218 01:02:33.405784 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:02:38 crc kubenswrapper[4847]: E0218 01:02:38.409579 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:02:45 crc kubenswrapper[4847]: I0218 01:02:45.405778 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:02:45 crc kubenswrapper[4847]: E0218 01:02:45.407566 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:02:45 crc kubenswrapper[4847]: E0218 01:02:45.408891 4847 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:02:51 crc kubenswrapper[4847]: E0218 01:02:51.535259 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:02:51 crc kubenswrapper[4847]: E0218 01:02:51.535741 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:02:51 crc kubenswrapper[4847]: E0218 01:02:51.535863 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9fhbdh67dh64h568hch65dhb8h67chf7h66dhfh585h565h85h554h68ch8bh56ch677h57bhb5h544h575hd7h5f8hc7h66dh669h57bh589h56cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:02:51 crc kubenswrapper[4847]: E0218 01:02:51.537059 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:02:59 crc kubenswrapper[4847]: I0218 01:02:59.404517 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:02:59 crc kubenswrapper[4847]: E0218 01:02:59.405629 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:02:59 crc kubenswrapper[4847]: E0218 01:02:59.406324 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:03:03 crc kubenswrapper[4847]: E0218 01:03:03.408581 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:03:10 crc kubenswrapper[4847]: I0218 01:03:10.405153 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:03:10 crc kubenswrapper[4847]: E0218 01:03:10.406225 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:03:14 crc kubenswrapper[4847]: E0218 01:03:14.407832 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:03:14 crc kubenswrapper[4847]: E0218 01:03:14.408641 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:03:25 crc kubenswrapper[4847]: I0218 01:03:25.405374 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:03:25 crc kubenswrapper[4847]: E0218 01:03:25.406131 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:03:26 crc kubenswrapper[4847]: E0218 01:03:26.415441 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:03:29 crc kubenswrapper[4847]: E0218 01:03:29.408046 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:03:37 crc kubenswrapper[4847]: E0218 01:03:37.424032 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:03:39 crc kubenswrapper[4847]: I0218 01:03:39.405577 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:03:39 crc kubenswrapper[4847]: E0218 01:03:39.406515 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:03:40 crc kubenswrapper[4847]: E0218 01:03:40.410292 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:03:50 crc kubenswrapper[4847]: I0218 01:03:50.404516 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:03:50 crc kubenswrapper[4847]: E0218 01:03:50.405782 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:03:51 crc kubenswrapper[4847]: E0218 01:03:51.410310 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:03:51 crc kubenswrapper[4847]: E0218 01:03:51.410588 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:04:02 crc kubenswrapper[4847]: I0218 01:04:02.404755 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:04:02 crc kubenswrapper[4847]: E0218 01:04:02.405738 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:04:04 crc kubenswrapper[4847]: E0218 01:04:04.412429 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:04:06 crc kubenswrapper[4847]: E0218 01:04:06.408956 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:04:15 crc kubenswrapper[4847]: I0218 01:04:15.405352 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:04:15 crc kubenswrapper[4847]: E0218 01:04:15.406239 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:04:15 crc kubenswrapper[4847]: E0218 01:04:15.408227 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:04:17 crc kubenswrapper[4847]: E0218 01:04:17.408387 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:04:29 crc kubenswrapper[4847]: E0218 01:04:29.419712 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:04:29 crc kubenswrapper[4847]: E0218 01:04:29.421843 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:04:30 crc kubenswrapper[4847]: I0218 01:04:30.404403 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:04:30 crc kubenswrapper[4847]: E0218 01:04:30.405040 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:04:43 crc kubenswrapper[4847]: I0218 01:04:43.404504 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:04:43 crc kubenswrapper[4847]: E0218 01:04:43.405741 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:04:43 crc kubenswrapper[4847]: E0218 01:04:43.409463 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:04:43 crc kubenswrapper[4847]: E0218 01:04:43.409652 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:04:54 crc kubenswrapper[4847]: I0218 01:04:54.404974 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:04:54 crc kubenswrapper[4847]: E0218 01:04:54.406123 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:04:57 crc kubenswrapper[4847]: E0218 01:04:57.418984 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:04:58 crc kubenswrapper[4847]: E0218 01:04:58.407052 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:05:09 crc kubenswrapper[4847]: I0218 01:05:09.405293 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:05:09 crc kubenswrapper[4847]: E0218 01:05:09.406517 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:05:09 crc kubenswrapper[4847]: E0218 01:05:09.407658 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:05:12 crc kubenswrapper[4847]: E0218 01:05:12.407710 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:05:21 crc kubenswrapper[4847]: E0218 01:05:21.406508 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:05:22 crc kubenswrapper[4847]: I0218 01:05:22.404811 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:05:22 crc kubenswrapper[4847]: E0218 01:05:22.405177 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:05:24 crc kubenswrapper[4847]: E0218 01:05:24.407314 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:05:35 crc kubenswrapper[4847]: E0218 01:05:35.406853 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:05:36 crc kubenswrapper[4847]: I0218 01:05:36.405222 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:05:36 crc kubenswrapper[4847]: E0218 01:05:36.406087 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:05:39 crc kubenswrapper[4847]: E0218 01:05:39.407782 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:05:47 crc kubenswrapper[4847]: I0218 01:05:47.424649 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:05:47 crc kubenswrapper[4847]: E0218 01:05:47.425717 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:05:49 crc kubenswrapper[4847]: E0218 01:05:49.407478 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:05:53 crc kubenswrapper[4847]: E0218 01:05:53.408467 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:06:01 crc kubenswrapper[4847]: I0218 01:06:01.406160 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:06:01 crc kubenswrapper[4847]: E0218 01:06:01.407668 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:06:02 crc kubenswrapper[4847]: E0218 01:06:02.408823 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:06:05 crc kubenswrapper[4847]: E0218 01:06:05.408874 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:06:09 crc kubenswrapper[4847]: I0218 01:06:09.560412 4847 generic.go:334] "Generic (PLEG): container finished" podID="d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d" containerID="2516ac175e2763c8fb9f262acc97c985321513cb87d1ca2b67a5e0b3c9fb01e5" exitCode=0 Feb 18 01:06:09 crc kubenswrapper[4847]: I0218 01:06:09.560629 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" event={"ID":"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d","Type":"ContainerDied","Data":"2516ac175e2763c8fb9f262acc97c985321513cb87d1ca2b67a5e0b3c9fb01e5"} Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.096960 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.259910 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn99s\" (UniqueName: \"kubernetes.io/projected/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-kube-api-access-cn99s\") pod \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\" (UID: \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\") " Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.260315 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-ssh-key-openstack-edpm-ipam\") pod \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\" (UID: \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\") " Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.260402 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-inventory\") pod \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\" (UID: \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\") " Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.260445 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-libvirt-secret-0\") pod \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\" (UID: \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\") " Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.260465 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-libvirt-combined-ca-bundle\") pod \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\" (UID: \"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d\") " Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.266835 4847 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-kube-api-access-cn99s" (OuterVolumeSpecName: "kube-api-access-cn99s") pod "d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d" (UID: "d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d"). InnerVolumeSpecName "kube-api-access-cn99s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.277193 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d" (UID: "d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.289280 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d" (UID: "d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.291494 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d" (UID: "d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.298363 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-inventory" (OuterVolumeSpecName: "inventory") pod "d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d" (UID: "d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.362774 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn99s\" (UniqueName: \"kubernetes.io/projected/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-kube-api-access-cn99s\") on node \"crc\" DevicePath \"\"" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.362836 4847 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.362847 4847 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.362857 4847 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.362866 4847 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.583357 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" event={"ID":"d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d","Type":"ContainerDied","Data":"ae4d21bd297ace19092531045664e59f96f29686344746e3c10a1fbabbb7ea1d"} Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.583409 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae4d21bd297ace19092531045664e59f96f29686344746e3c10a1fbabbb7ea1d" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.583442 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.745166 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9"] Feb 18 01:06:11 crc kubenswrapper[4847]: E0218 01:06:11.745857 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.745892 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.746415 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.747768 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.760639 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9"] Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.761672 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xzl9l" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.762038 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.762386 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.762581 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.762814 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.884897 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.884977 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ssh-key-openstack-edpm-ipam\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.885023 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.885050 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.885086 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.885210 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.885244 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbmkl\" (UniqueName: \"kubernetes.io/projected/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-kube-api-access-lbmkl\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.987271 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.987354 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.987413 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.987449 4847 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.987473 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.987588 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.987718 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbmkl\" (UniqueName: \"kubernetes.io/projected/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-kube-api-access-lbmkl\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.992165 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.992264 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.992915 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.994556 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:06:11 crc kubenswrapper[4847]: I0218 01:06:11.994561 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9\" (UID: 
\"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:06:12 crc kubenswrapper[4847]: I0218 01:06:12.000058 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:06:12 crc kubenswrapper[4847]: I0218 01:06:12.010136 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbmkl\" (UniqueName: \"kubernetes.io/projected/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-kube-api-access-lbmkl\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:06:12 crc kubenswrapper[4847]: I0218 01:06:12.094260 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:06:12 crc kubenswrapper[4847]: I0218 01:06:12.404472 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:06:12 crc kubenswrapper[4847]: E0218 01:06:12.405087 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:06:12 crc kubenswrapper[4847]: I0218 01:06:12.720181 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9"] Feb 18 01:06:13 crc kubenswrapper[4847]: I0218 01:06:13.610727 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" event={"ID":"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4","Type":"ContainerStarted","Data":"1774fb0811c7963bc456fd0e273833b5b7e1ebbcd89e2859f8b779ac1eb7d1bf"} Feb 18 01:06:13 crc kubenswrapper[4847]: I0218 01:06:13.613110 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" event={"ID":"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4","Type":"ContainerStarted","Data":"9a998546c0923a9a924add4dec15db77ed3a17b666d7bfd6c26fa943a9342d26"} Feb 18 01:06:13 crc kubenswrapper[4847]: I0218 01:06:13.635570 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" podStartSLOduration=2.14082195 podStartE2EDuration="2.635538678s" podCreationTimestamp="2026-02-18 01:06:11 +0000 UTC" firstStartedPulling="2026-02-18 
01:06:12.729344575 +0000 UTC m=+2446.106695557" lastFinishedPulling="2026-02-18 01:06:13.224061333 +0000 UTC m=+2446.601412285" observedRunningTime="2026-02-18 01:06:13.629115543 +0000 UTC m=+2447.006466485" watchObservedRunningTime="2026-02-18 01:06:13.635538678 +0000 UTC m=+2447.012889660" Feb 18 01:06:15 crc kubenswrapper[4847]: E0218 01:06:15.407540 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:06:20 crc kubenswrapper[4847]: E0218 01:06:20.409284 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:06:24 crc kubenswrapper[4847]: I0218 01:06:24.405217 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:06:24 crc kubenswrapper[4847]: E0218 01:06:24.406376 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:06:28 crc kubenswrapper[4847]: E0218 01:06:28.407208 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:06:32 crc kubenswrapper[4847]: E0218 01:06:32.408457 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:06:37 crc kubenswrapper[4847]: I0218 01:06:37.410849 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:06:37 crc kubenswrapper[4847]: E0218 01:06:37.412106 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:06:42 crc kubenswrapper[4847]: E0218 01:06:42.408726 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:06:43 crc kubenswrapper[4847]: E0218 01:06:43.407059 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:06:51 crc kubenswrapper[4847]: I0218 01:06:51.404547 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:06:51 crc kubenswrapper[4847]: E0218 01:06:51.405401 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:06:54 crc kubenswrapper[4847]: E0218 01:06:54.407685 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:06:55 crc kubenswrapper[4847]: E0218 01:06:55.408305 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:07:02 crc kubenswrapper[4847]: I0218 01:07:02.405261 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:07:03 crc kubenswrapper[4847]: I0218 01:07:03.232792 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" 
event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerStarted","Data":"2511ba7f8c9a7fd7741c7c7720bc448ebe3e64f3219d62946cd69e0f35a07fe2"} Feb 18 01:07:05 crc kubenswrapper[4847]: E0218 01:07:05.407781 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:07:09 crc kubenswrapper[4847]: E0218 01:07:09.412763 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:07:10 crc kubenswrapper[4847]: I0218 01:07:10.321884 4847 generic.go:334] "Generic (PLEG): container finished" podID="4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4" containerID="1774fb0811c7963bc456fd0e273833b5b7e1ebbcd89e2859f8b779ac1eb7d1bf" exitCode=2 Feb 18 01:07:10 crc kubenswrapper[4847]: I0218 01:07:10.321946 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" event={"ID":"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4","Type":"ContainerDied","Data":"1774fb0811c7963bc456fd0e273833b5b7e1ebbcd89e2859f8b779ac1eb7d1bf"} Feb 18 01:07:11 crc kubenswrapper[4847]: I0218 01:07:11.921692 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.021882 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbmkl\" (UniqueName: \"kubernetes.io/projected/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-kube-api-access-lbmkl\") pod \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.022367 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ceilometer-compute-config-data-1\") pod \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.029955 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-kube-api-access-lbmkl" (OuterVolumeSpecName: "kube-api-access-lbmkl") pod "4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4" (UID: "4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4"). InnerVolumeSpecName "kube-api-access-lbmkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.068072 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4" (UID: "4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.123560 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ssh-key-openstack-edpm-ipam\") pod \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.123636 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ceilometer-compute-config-data-0\") pod \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.123802 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-inventory\") pod \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.123824 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ceilometer-compute-config-data-2\") pod \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.123909 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-telemetry-combined-ca-bundle\") pod \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\" (UID: \"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4\") " Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.124402 4847 
reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.124427 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbmkl\" (UniqueName: \"kubernetes.io/projected/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-kube-api-access-lbmkl\") on node \"crc\" DevicePath \"\"" Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.130324 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4" (UID: "4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.153577 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-inventory" (OuterVolumeSpecName: "inventory") pod "4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4" (UID: "4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.157811 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4" (UID: "4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.162380 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4" (UID: "4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.171741 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4" (UID: "4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.228298 4847 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.228369 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.228391 4847 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.228409 4847 reconciler_common.go:293] "Volume detached for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.228425 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.352046 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" event={"ID":"4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4","Type":"ContainerDied","Data":"9a998546c0923a9a924add4dec15db77ed3a17b666d7bfd6c26fa943a9342d26"} Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.352107 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a998546c0923a9a924add4dec15db77ed3a17b666d7bfd6c26fa943a9342d26" Feb 18 01:07:12 crc kubenswrapper[4847]: I0218 01:07:12.352103 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9" Feb 18 01:07:18 crc kubenswrapper[4847]: E0218 01:07:18.407500 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.054652 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69"] Feb 18 01:07:20 crc kubenswrapper[4847]: E0218 01:07:20.055643 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.055666 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.056127 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.057377 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.060681 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.061356 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.061540 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xzl9l" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.061788 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.061962 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.068157 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69"] Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.117993 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pph69\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.118084 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qqdk\" (UniqueName: \"kubernetes.io/projected/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-kube-api-access-9qqdk\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-pph69\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.118235 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pph69\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.118288 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pph69\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.118315 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pph69\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.118522 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pph69\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.118706 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pph69\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.220681 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pph69\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.220818 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qqdk\" (UniqueName: \"kubernetes.io/projected/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-kube-api-access-9qqdk\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pph69\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.221018 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pph69\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.221108 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pph69\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.222029 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pph69\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.222130 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pph69\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.222223 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pph69\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.228287 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pph69\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.228293 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pph69\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.228482 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pph69\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.228515 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pph69\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.228876 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pph69\" (UID: 
\"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.229352 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pph69\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.240841 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qqdk\" (UniqueName: \"kubernetes.io/projected/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-kube-api-access-9qqdk\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-pph69\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.381711 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:07:20 crc kubenswrapper[4847]: E0218 01:07:20.405513 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:07:20 crc kubenswrapper[4847]: I0218 01:07:20.824935 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69"] Feb 18 01:07:20 crc kubenswrapper[4847]: W0218 01:07:20.833994 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b3ece2f_0bb7_4404_b500_5da0aa7aea40.slice/crio-c5a4722ea443d6fa28fca424199db70feb2936c8a9e8b7bcb94993da4161867a WatchSource:0}: Error finding container c5a4722ea443d6fa28fca424199db70feb2936c8a9e8b7bcb94993da4161867a: Status 404 returned error can't find the container with id c5a4722ea443d6fa28fca424199db70feb2936c8a9e8b7bcb94993da4161867a Feb 18 01:07:21 crc kubenswrapper[4847]: I0218 01:07:21.495977 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" event={"ID":"8b3ece2f-0bb7-4404-b500-5da0aa7aea40","Type":"ContainerStarted","Data":"c5a4722ea443d6fa28fca424199db70feb2936c8a9e8b7bcb94993da4161867a"} Feb 18 01:07:22 crc kubenswrapper[4847]: I0218 01:07:22.508235 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" event={"ID":"8b3ece2f-0bb7-4404-b500-5da0aa7aea40","Type":"ContainerStarted","Data":"5c02b9d8eae7bd55532f4b8e69de88c091525c3000699ced614c2775f52b645d"} Feb 18 01:07:22 crc kubenswrapper[4847]: I0218 01:07:22.530970 4847 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" podStartSLOduration=2.033235833 podStartE2EDuration="2.530949634s" podCreationTimestamp="2026-02-18 01:07:20 +0000 UTC" firstStartedPulling="2026-02-18 01:07:20.840290498 +0000 UTC m=+2514.217641450" lastFinishedPulling="2026-02-18 01:07:21.338004309 +0000 UTC m=+2514.715355251" observedRunningTime="2026-02-18 01:07:22.526414624 +0000 UTC m=+2515.903765576" watchObservedRunningTime="2026-02-18 01:07:22.530949634 +0000 UTC m=+2515.908300596" Feb 18 01:07:29 crc kubenswrapper[4847]: E0218 01:07:29.406901 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:07:33 crc kubenswrapper[4847]: I0218 01:07:33.407939 4847 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 01:07:33 crc kubenswrapper[4847]: E0218 01:07:33.530439 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:07:33 crc kubenswrapper[4847]: E0218 01:07:33.530526 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:07:33 crc kubenswrapper[4847]: E0218 01:07:33.530754 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjwt
2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-k4t5r_openstack(452f74c1-fa5f-464b-9943-a4a1c2d5c48a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:07:33 crc kubenswrapper[4847]: E0218 01:07:33.531955 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:07:43 crc kubenswrapper[4847]: E0218 01:07:43.416033 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:07:45 crc kubenswrapper[4847]: E0218 01:07:45.406401 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:07:55 crc kubenswrapper[4847]: E0218 01:07:55.560214 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:07:55 crc kubenswrapper[4847]: E0218 01:07:55.560938 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:07:55 crc kubenswrapper[4847]: E0218 01:07:55.561089 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9fhbdh67dh64h568hch65dhb8h67chf7h66dhfh585h565h85h554h68ch8bh56ch677h57bhb5h544h575hd7h5f8hc7h66dh669h57bh589h56cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:07:55 crc kubenswrapper[4847]: E0218 01:07:55.562521 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:08:00 crc kubenswrapper[4847]: E0218 01:08:00.405825 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:08:10 crc kubenswrapper[4847]: E0218 01:08:10.408659 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:08:12 crc kubenswrapper[4847]: E0218 01:08:12.406211 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:08:15 crc kubenswrapper[4847]: I0218 01:08:15.939941 4847 generic.go:334] "Generic (PLEG): container finished" podID="8b3ece2f-0bb7-4404-b500-5da0aa7aea40" containerID="5c02b9d8eae7bd55532f4b8e69de88c091525c3000699ced614c2775f52b645d" exitCode=2 Feb 18 01:08:15 crc kubenswrapper[4847]: I0218 01:08:15.940038 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" event={"ID":"8b3ece2f-0bb7-4404-b500-5da0aa7aea40","Type":"ContainerDied","Data":"5c02b9d8eae7bd55532f4b8e69de88c091525c3000699ced614c2775f52b645d"} Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.381524 4847 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.520422 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ceilometer-compute-config-data-2\") pod \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.520818 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-telemetry-combined-ca-bundle\") pod \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.520951 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qqdk\" (UniqueName: \"kubernetes.io/projected/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-kube-api-access-9qqdk\") pod \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.521127 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-inventory\") pod \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.521159 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ssh-key-openstack-edpm-ipam\") pod \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " Feb 18 01:08:17 crc 
kubenswrapper[4847]: I0218 01:08:17.521207 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ceilometer-compute-config-data-0\") pod \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.521255 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ceilometer-compute-config-data-1\") pod \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\" (UID: \"8b3ece2f-0bb7-4404-b500-5da0aa7aea40\") " Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.537731 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "8b3ece2f-0bb7-4404-b500-5da0aa7aea40" (UID: "8b3ece2f-0bb7-4404-b500-5da0aa7aea40"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.538464 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-kube-api-access-9qqdk" (OuterVolumeSpecName: "kube-api-access-9qqdk") pod "8b3ece2f-0bb7-4404-b500-5da0aa7aea40" (UID: "8b3ece2f-0bb7-4404-b500-5da0aa7aea40"). InnerVolumeSpecName "kube-api-access-9qqdk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.556893 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "8b3ece2f-0bb7-4404-b500-5da0aa7aea40" (UID: "8b3ece2f-0bb7-4404-b500-5da0aa7aea40"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.560101 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "8b3ece2f-0bb7-4404-b500-5da0aa7aea40" (UID: "8b3ece2f-0bb7-4404-b500-5da0aa7aea40"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.567345 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "8b3ece2f-0bb7-4404-b500-5da0aa7aea40" (UID: "8b3ece2f-0bb7-4404-b500-5da0aa7aea40"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.574454 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8b3ece2f-0bb7-4404-b500-5da0aa7aea40" (UID: "8b3ece2f-0bb7-4404-b500-5da0aa7aea40"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.591024 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-inventory" (OuterVolumeSpecName: "inventory") pod "8b3ece2f-0bb7-4404-b500-5da0aa7aea40" (UID: "8b3ece2f-0bb7-4404-b500-5da0aa7aea40"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.624221 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.624277 4847 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.624291 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qqdk\" (UniqueName: \"kubernetes.io/projected/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-kube-api-access-9qqdk\") on node \"crc\" DevicePath \"\"" Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.624302 4847 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.624311 4847 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.624319 4847 reconciler_common.go:293] "Volume 
detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.624330 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8b3ece2f-0bb7-4404-b500-5da0aa7aea40-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.962841 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" event={"ID":"8b3ece2f-0bb7-4404-b500-5da0aa7aea40","Type":"ContainerDied","Data":"c5a4722ea443d6fa28fca424199db70feb2936c8a9e8b7bcb94993da4161867a"} Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.962888 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5a4722ea443d6fa28fca424199db70feb2936c8a9e8b7bcb94993da4161867a" Feb 18 01:08:17 crc kubenswrapper[4847]: I0218 01:08:17.962930 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-pph69" Feb 18 01:08:18 crc kubenswrapper[4847]: E0218 01:08:18.237938 4847 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b3ece2f_0bb7_4404_b500_5da0aa7aea40.slice/crio-c5a4722ea443d6fa28fca424199db70feb2936c8a9e8b7bcb94993da4161867a\": RecentStats: unable to find data in memory cache]" Feb 18 01:08:24 crc kubenswrapper[4847]: E0218 01:08:24.406373 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:08:24 crc kubenswrapper[4847]: E0218 01:08:24.406704 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.044883 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks"] Feb 18 01:08:35 crc kubenswrapper[4847]: E0218 01:08:35.045959 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b3ece2f-0bb7-4404-b500-5da0aa7aea40" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.045977 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b3ece2f-0bb7-4404-b500-5da0aa7aea40" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 
01:08:35.046251 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b3ece2f-0bb7-4404-b500-5da0aa7aea40" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.047127 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.051048 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.051171 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xzl9l" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.051657 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.052911 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.053077 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.072529 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks"] Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.182260 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptdd4\" (UniqueName: \"kubernetes.io/projected/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-kube-api-access-ptdd4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mzrks\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 
01:08:35.182525 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mzrks\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.182585 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mzrks\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.182714 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mzrks\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.182753 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mzrks\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.182803 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mzrks\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.182837 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mzrks\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.284830 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mzrks\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.284888 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mzrks\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.284951 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mzrks\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.284989 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mzrks\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.285045 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mzrks\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.285078 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mzrks\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.285107 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdd4\" (UniqueName: \"kubernetes.io/projected/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-kube-api-access-ptdd4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mzrks\" (UID: 
\"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.294755 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mzrks\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.297141 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mzrks\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.297512 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mzrks\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.299383 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mzrks\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.299940 
4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mzrks\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.301450 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mzrks\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.315952 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptdd4\" (UniqueName: \"kubernetes.io/projected/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-kube-api-access-ptdd4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mzrks\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.384169 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:08:35 crc kubenswrapper[4847]: I0218 01:08:35.999159 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks"] Feb 18 01:08:36 crc kubenswrapper[4847]: I0218 01:08:36.184417 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" event={"ID":"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1","Type":"ContainerStarted","Data":"3ed3d0eb65eb9bc2421f9d861041b1a135b50ae7c8c9952b5a47a24d8f61e9f5"} Feb 18 01:08:37 crc kubenswrapper[4847]: I0218 01:08:37.196182 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" event={"ID":"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1","Type":"ContainerStarted","Data":"ea8e04b1194af32a50a815b085fe0173e762f0fd8232f474e845f4b0bafee75e"} Feb 18 01:08:37 crc kubenswrapper[4847]: I0218 01:08:37.234270 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" podStartSLOduration=1.811039655 podStartE2EDuration="2.234251186s" podCreationTimestamp="2026-02-18 01:08:35 +0000 UTC" firstStartedPulling="2026-02-18 01:08:36.002856356 +0000 UTC m=+2589.380207338" lastFinishedPulling="2026-02-18 01:08:36.426067887 +0000 UTC m=+2589.803418869" observedRunningTime="2026-02-18 01:08:37.219006504 +0000 UTC m=+2590.596357476" watchObservedRunningTime="2026-02-18 01:08:37.234251186 +0000 UTC m=+2590.611602138" Feb 18 01:08:38 crc kubenswrapper[4847]: E0218 01:08:38.407756 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:08:39 crc kubenswrapper[4847]: E0218 01:08:39.407688 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:08:51 crc kubenswrapper[4847]: E0218 01:08:51.406271 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:08:54 crc kubenswrapper[4847]: E0218 01:08:54.407866 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:09:02 crc kubenswrapper[4847]: E0218 01:09:02.408053 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:09:06 crc kubenswrapper[4847]: E0218 01:09:06.408309 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:09:14 crc kubenswrapper[4847]: E0218 01:09:14.408993 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:09:14 crc kubenswrapper[4847]: I0218 01:09:14.904473 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lzqc8"] Feb 18 01:09:14 crc kubenswrapper[4847]: I0218 01:09:14.907252 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lzqc8" Feb 18 01:09:14 crc kubenswrapper[4847]: I0218 01:09:14.919650 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzqc8"] Feb 18 01:09:14 crc kubenswrapper[4847]: I0218 01:09:14.990916 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/848ec456-147b-4af3-9da7-6b01f870d16f-utilities\") pod \"redhat-marketplace-lzqc8\" (UID: \"848ec456-147b-4af3-9da7-6b01f870d16f\") " pod="openshift-marketplace/redhat-marketplace-lzqc8" Feb 18 01:09:14 crc kubenswrapper[4847]: I0218 01:09:14.990997 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtdxl\" (UniqueName: \"kubernetes.io/projected/848ec456-147b-4af3-9da7-6b01f870d16f-kube-api-access-rtdxl\") pod \"redhat-marketplace-lzqc8\" (UID: \"848ec456-147b-4af3-9da7-6b01f870d16f\") " pod="openshift-marketplace/redhat-marketplace-lzqc8" Feb 18 01:09:14 crc kubenswrapper[4847]: I0218 01:09:14.991416 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/848ec456-147b-4af3-9da7-6b01f870d16f-catalog-content\") pod \"redhat-marketplace-lzqc8\" (UID: \"848ec456-147b-4af3-9da7-6b01f870d16f\") " pod="openshift-marketplace/redhat-marketplace-lzqc8" Feb 18 01:09:15 crc kubenswrapper[4847]: I0218 01:09:15.093375 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/848ec456-147b-4af3-9da7-6b01f870d16f-utilities\") pod \"redhat-marketplace-lzqc8\" (UID: \"848ec456-147b-4af3-9da7-6b01f870d16f\") " pod="openshift-marketplace/redhat-marketplace-lzqc8" Feb 18 01:09:15 crc kubenswrapper[4847]: I0218 01:09:15.093441 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtdxl\" (UniqueName: \"kubernetes.io/projected/848ec456-147b-4af3-9da7-6b01f870d16f-kube-api-access-rtdxl\") pod \"redhat-marketplace-lzqc8\" (UID: \"848ec456-147b-4af3-9da7-6b01f870d16f\") " pod="openshift-marketplace/redhat-marketplace-lzqc8" Feb 18 01:09:15 crc kubenswrapper[4847]: I0218 01:09:15.093527 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/848ec456-147b-4af3-9da7-6b01f870d16f-catalog-content\") pod \"redhat-marketplace-lzqc8\" (UID: \"848ec456-147b-4af3-9da7-6b01f870d16f\") " pod="openshift-marketplace/redhat-marketplace-lzqc8" Feb 18 01:09:15 crc kubenswrapper[4847]: I0218 01:09:15.094021 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/848ec456-147b-4af3-9da7-6b01f870d16f-catalog-content\") pod \"redhat-marketplace-lzqc8\" (UID: \"848ec456-147b-4af3-9da7-6b01f870d16f\") " pod="openshift-marketplace/redhat-marketplace-lzqc8" Feb 18 01:09:15 crc kubenswrapper[4847]: I0218 01:09:15.094014 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/848ec456-147b-4af3-9da7-6b01f870d16f-utilities\") pod \"redhat-marketplace-lzqc8\" (UID: \"848ec456-147b-4af3-9da7-6b01f870d16f\") " pod="openshift-marketplace/redhat-marketplace-lzqc8" Feb 18 01:09:15 crc kubenswrapper[4847]: I0218 01:09:15.119691 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtdxl\" (UniqueName: \"kubernetes.io/projected/848ec456-147b-4af3-9da7-6b01f870d16f-kube-api-access-rtdxl\") pod \"redhat-marketplace-lzqc8\" (UID: \"848ec456-147b-4af3-9da7-6b01f870d16f\") " pod="openshift-marketplace/redhat-marketplace-lzqc8" Feb 18 01:09:15 crc kubenswrapper[4847]: I0218 01:09:15.238486 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lzqc8" Feb 18 01:09:15 crc kubenswrapper[4847]: I0218 01:09:15.762241 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzqc8"] Feb 18 01:09:16 crc kubenswrapper[4847]: I0218 01:09:16.659626 4847 generic.go:334] "Generic (PLEG): container finished" podID="848ec456-147b-4af3-9da7-6b01f870d16f" containerID="972ce2ed4c22f4f0813d6e375ae586437e3144a9e8de123375e3e40cd9a61ed9" exitCode=0 Feb 18 01:09:16 crc kubenswrapper[4847]: I0218 01:09:16.659729 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzqc8" event={"ID":"848ec456-147b-4af3-9da7-6b01f870d16f","Type":"ContainerDied","Data":"972ce2ed4c22f4f0813d6e375ae586437e3144a9e8de123375e3e40cd9a61ed9"} Feb 18 01:09:16 crc kubenswrapper[4847]: I0218 01:09:16.660010 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzqc8" event={"ID":"848ec456-147b-4af3-9da7-6b01f870d16f","Type":"ContainerStarted","Data":"896bb900ee51a5299beaa97aa133e2c8e25e6d1741b3c4941bc2d8373913f0a4"} Feb 18 01:09:17 crc kubenswrapper[4847]: I0218 01:09:17.673671 4847 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-lzqc8" event={"ID":"848ec456-147b-4af3-9da7-6b01f870d16f","Type":"ContainerStarted","Data":"a144e8bc1abbcd00d1a97e6da38dfc53673ed10bbc5ea3da79fa492e297224d6"} Feb 18 01:09:18 crc kubenswrapper[4847]: E0218 01:09:18.407296 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:09:18 crc kubenswrapper[4847]: I0218 01:09:18.686279 4847 generic.go:334] "Generic (PLEG): container finished" podID="848ec456-147b-4af3-9da7-6b01f870d16f" containerID="a144e8bc1abbcd00d1a97e6da38dfc53673ed10bbc5ea3da79fa492e297224d6" exitCode=0 Feb 18 01:09:18 crc kubenswrapper[4847]: I0218 01:09:18.686374 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzqc8" event={"ID":"848ec456-147b-4af3-9da7-6b01f870d16f","Type":"ContainerDied","Data":"a144e8bc1abbcd00d1a97e6da38dfc53673ed10bbc5ea3da79fa492e297224d6"} Feb 18 01:09:19 crc kubenswrapper[4847]: I0218 01:09:19.699978 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzqc8" event={"ID":"848ec456-147b-4af3-9da7-6b01f870d16f","Type":"ContainerStarted","Data":"ecb31791c7cd1a8965eeb3a23fc1367d1c6586e9187417a495e50daf827cfcd5"} Feb 18 01:09:19 crc kubenswrapper[4847]: I0218 01:09:19.733348 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lzqc8" podStartSLOduration=3.311662817 podStartE2EDuration="5.733327626s" podCreationTimestamp="2026-02-18 01:09:14 +0000 UTC" firstStartedPulling="2026-02-18 01:09:16.66290901 +0000 UTC m=+2630.040259982" lastFinishedPulling="2026-02-18 01:09:19.084573809 +0000 UTC m=+2632.461924791" 
observedRunningTime="2026-02-18 01:09:19.723281901 +0000 UTC m=+2633.100632873" watchObservedRunningTime="2026-02-18 01:09:19.733327626 +0000 UTC m=+2633.110678588" Feb 18 01:09:23 crc kubenswrapper[4847]: I0218 01:09:23.491550 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:09:23 crc kubenswrapper[4847]: I0218 01:09:23.492389 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:09:25 crc kubenswrapper[4847]: I0218 01:09:25.239207 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lzqc8" Feb 18 01:09:25 crc kubenswrapper[4847]: I0218 01:09:25.241773 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lzqc8" Feb 18 01:09:25 crc kubenswrapper[4847]: I0218 01:09:25.327127 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lzqc8" Feb 18 01:09:25 crc kubenswrapper[4847]: I0218 01:09:25.854102 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lzqc8" Feb 18 01:09:25 crc kubenswrapper[4847]: I0218 01:09:25.933136 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzqc8"] Feb 18 01:09:27 crc kubenswrapper[4847]: I0218 01:09:27.794567 4847 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-lzqc8" podUID="848ec456-147b-4af3-9da7-6b01f870d16f" containerName="registry-server" containerID="cri-o://ecb31791c7cd1a8965eeb3a23fc1367d1c6586e9187417a495e50daf827cfcd5" gracePeriod=2 Feb 18 01:09:28 crc kubenswrapper[4847]: E0218 01:09:28.407236 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:09:28 crc kubenswrapper[4847]: I0218 01:09:28.803747 4847 generic.go:334] "Generic (PLEG): container finished" podID="a8cdefc7-b3d5-4ef5-a08b-611fed8486b1" containerID="ea8e04b1194af32a50a815b085fe0173e762f0fd8232f474e845f4b0bafee75e" exitCode=2 Feb 18 01:09:28 crc kubenswrapper[4847]: I0218 01:09:28.803830 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" event={"ID":"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1","Type":"ContainerDied","Data":"ea8e04b1194af32a50a815b085fe0173e762f0fd8232f474e845f4b0bafee75e"} Feb 18 01:09:28 crc kubenswrapper[4847]: I0218 01:09:28.807067 4847 generic.go:334] "Generic (PLEG): container finished" podID="848ec456-147b-4af3-9da7-6b01f870d16f" containerID="ecb31791c7cd1a8965eeb3a23fc1367d1c6586e9187417a495e50daf827cfcd5" exitCode=0 Feb 18 01:09:28 crc kubenswrapper[4847]: I0218 01:09:28.807135 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzqc8" event={"ID":"848ec456-147b-4af3-9da7-6b01f870d16f","Type":"ContainerDied","Data":"ecb31791c7cd1a8965eeb3a23fc1367d1c6586e9187417a495e50daf827cfcd5"} Feb 18 01:09:28 crc kubenswrapper[4847]: I0218 01:09:28.807379 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzqc8" 
event={"ID":"848ec456-147b-4af3-9da7-6b01f870d16f","Type":"ContainerDied","Data":"896bb900ee51a5299beaa97aa133e2c8e25e6d1741b3c4941bc2d8373913f0a4"} Feb 18 01:09:28 crc kubenswrapper[4847]: I0218 01:09:28.807397 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="896bb900ee51a5299beaa97aa133e2c8e25e6d1741b3c4941bc2d8373913f0a4" Feb 18 01:09:28 crc kubenswrapper[4847]: I0218 01:09:28.852171 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lzqc8" Feb 18 01:09:28 crc kubenswrapper[4847]: I0218 01:09:28.972316 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/848ec456-147b-4af3-9da7-6b01f870d16f-catalog-content\") pod \"848ec456-147b-4af3-9da7-6b01f870d16f\" (UID: \"848ec456-147b-4af3-9da7-6b01f870d16f\") " Feb 18 01:09:28 crc kubenswrapper[4847]: I0218 01:09:28.972526 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtdxl\" (UniqueName: \"kubernetes.io/projected/848ec456-147b-4af3-9da7-6b01f870d16f-kube-api-access-rtdxl\") pod \"848ec456-147b-4af3-9da7-6b01f870d16f\" (UID: \"848ec456-147b-4af3-9da7-6b01f870d16f\") " Feb 18 01:09:28 crc kubenswrapper[4847]: I0218 01:09:28.972880 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/848ec456-147b-4af3-9da7-6b01f870d16f-utilities\") pod \"848ec456-147b-4af3-9da7-6b01f870d16f\" (UID: \"848ec456-147b-4af3-9da7-6b01f870d16f\") " Feb 18 01:09:28 crc kubenswrapper[4847]: I0218 01:09:28.973855 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/848ec456-147b-4af3-9da7-6b01f870d16f-utilities" (OuterVolumeSpecName: "utilities") pod "848ec456-147b-4af3-9da7-6b01f870d16f" (UID: "848ec456-147b-4af3-9da7-6b01f870d16f"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:09:28 crc kubenswrapper[4847]: I0218 01:09:28.982024 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/848ec456-147b-4af3-9da7-6b01f870d16f-kube-api-access-rtdxl" (OuterVolumeSpecName: "kube-api-access-rtdxl") pod "848ec456-147b-4af3-9da7-6b01f870d16f" (UID: "848ec456-147b-4af3-9da7-6b01f870d16f"). InnerVolumeSpecName "kube-api-access-rtdxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:09:28 crc kubenswrapper[4847]: I0218 01:09:28.996494 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/848ec456-147b-4af3-9da7-6b01f870d16f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "848ec456-147b-4af3-9da7-6b01f870d16f" (UID: "848ec456-147b-4af3-9da7-6b01f870d16f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:09:29 crc kubenswrapper[4847]: I0218 01:09:29.074830 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/848ec456-147b-4af3-9da7-6b01f870d16f-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:09:29 crc kubenswrapper[4847]: I0218 01:09:29.075246 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/848ec456-147b-4af3-9da7-6b01f870d16f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:09:29 crc kubenswrapper[4847]: I0218 01:09:29.075322 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtdxl\" (UniqueName: \"kubernetes.io/projected/848ec456-147b-4af3-9da7-6b01f870d16f-kube-api-access-rtdxl\") on node \"crc\" DevicePath \"\"" Feb 18 01:09:29 crc kubenswrapper[4847]: I0218 01:09:29.821954 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lzqc8" Feb 18 01:09:29 crc kubenswrapper[4847]: I0218 01:09:29.868632 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzqc8"] Feb 18 01:09:29 crc kubenswrapper[4847]: I0218 01:09:29.882659 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzqc8"] Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.341948 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:09:30 crc kubenswrapper[4847]: E0218 01:09:30.410877 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.501595 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-telemetry-combined-ca-bundle\") pod \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.501791 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ssh-key-openstack-edpm-ipam\") pod \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.501928 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptdd4\" (UniqueName: 
\"kubernetes.io/projected/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-kube-api-access-ptdd4\") pod \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.501994 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-inventory\") pod \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.502059 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ceilometer-compute-config-data-1\") pod \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.502101 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ceilometer-compute-config-data-0\") pod \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.502149 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ceilometer-compute-config-data-2\") pod \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\" (UID: \"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1\") " Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.513918 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-kube-api-access-ptdd4" (OuterVolumeSpecName: "kube-api-access-ptdd4") pod "a8cdefc7-b3d5-4ef5-a08b-611fed8486b1" 
(UID: "a8cdefc7-b3d5-4ef5-a08b-611fed8486b1"). InnerVolumeSpecName "kube-api-access-ptdd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.514135 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "a8cdefc7-b3d5-4ef5-a08b-611fed8486b1" (UID: "a8cdefc7-b3d5-4ef5-a08b-611fed8486b1"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.534074 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a8cdefc7-b3d5-4ef5-a08b-611fed8486b1" (UID: "a8cdefc7-b3d5-4ef5-a08b-611fed8486b1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.536566 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-inventory" (OuterVolumeSpecName: "inventory") pod "a8cdefc7-b3d5-4ef5-a08b-611fed8486b1" (UID: "a8cdefc7-b3d5-4ef5-a08b-611fed8486b1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.558043 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "a8cdefc7-b3d5-4ef5-a08b-611fed8486b1" (UID: "a8cdefc7-b3d5-4ef5-a08b-611fed8486b1"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.558287 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "a8cdefc7-b3d5-4ef5-a08b-611fed8486b1" (UID: "a8cdefc7-b3d5-4ef5-a08b-611fed8486b1"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.560060 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "a8cdefc7-b3d5-4ef5-a08b-611fed8486b1" (UID: "a8cdefc7-b3d5-4ef5-a08b-611fed8486b1"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.605661 4847 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.605797 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.605817 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.605834 4847 reconciler_common.go:293] "Volume 
detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.605847 4847 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.605863 4847 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.605876 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptdd4\" (UniqueName: \"kubernetes.io/projected/a8cdefc7-b3d5-4ef5-a08b-611fed8486b1-kube-api-access-ptdd4\") on node \"crc\" DevicePath \"\"" Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.838964 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" event={"ID":"a8cdefc7-b3d5-4ef5-a08b-611fed8486b1","Type":"ContainerDied","Data":"3ed3d0eb65eb9bc2421f9d861041b1a135b50ae7c8c9952b5a47a24d8f61e9f5"} Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.839027 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ed3d0eb65eb9bc2421f9d861041b1a135b50ae7c8c9952b5a47a24d8f61e9f5" Feb 18 01:09:30 crc kubenswrapper[4847]: I0218 01:09:30.839055 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mzrks" Feb 18 01:09:31 crc kubenswrapper[4847]: I0218 01:09:31.111694 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qhxgm"] Feb 18 01:09:31 crc kubenswrapper[4847]: E0218 01:09:31.112408 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848ec456-147b-4af3-9da7-6b01f870d16f" containerName="extract-utilities" Feb 18 01:09:31 crc kubenswrapper[4847]: I0218 01:09:31.112441 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="848ec456-147b-4af3-9da7-6b01f870d16f" containerName="extract-utilities" Feb 18 01:09:31 crc kubenswrapper[4847]: E0218 01:09:31.112500 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8cdefc7-b3d5-4ef5-a08b-611fed8486b1" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 01:09:31 crc kubenswrapper[4847]: I0218 01:09:31.112516 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8cdefc7-b3d5-4ef5-a08b-611fed8486b1" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 01:09:31 crc kubenswrapper[4847]: E0218 01:09:31.112549 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848ec456-147b-4af3-9da7-6b01f870d16f" containerName="extract-content" Feb 18 01:09:31 crc kubenswrapper[4847]: I0218 01:09:31.112563 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="848ec456-147b-4af3-9da7-6b01f870d16f" containerName="extract-content" Feb 18 01:09:31 crc kubenswrapper[4847]: E0218 01:09:31.112590 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="848ec456-147b-4af3-9da7-6b01f870d16f" containerName="registry-server" Feb 18 01:09:31 crc kubenswrapper[4847]: I0218 01:09:31.112628 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="848ec456-147b-4af3-9da7-6b01f870d16f" containerName="registry-server" Feb 18 01:09:31 crc kubenswrapper[4847]: I0218 01:09:31.113033 4847 
memory_manager.go:354] "RemoveStaleState removing state" podUID="848ec456-147b-4af3-9da7-6b01f870d16f" containerName="registry-server" Feb 18 01:09:31 crc kubenswrapper[4847]: I0218 01:09:31.113076 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8cdefc7-b3d5-4ef5-a08b-611fed8486b1" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 01:09:31 crc kubenswrapper[4847]: I0218 01:09:31.116545 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qhxgm" Feb 18 01:09:31 crc kubenswrapper[4847]: I0218 01:09:31.143130 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qhxgm"] Feb 18 01:09:31 crc kubenswrapper[4847]: I0218 01:09:31.221779 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfb7m\" (UniqueName: \"kubernetes.io/projected/57b62168-2a66-4a6b-b603-f5600faa7d4c-kube-api-access-xfb7m\") pod \"certified-operators-qhxgm\" (UID: \"57b62168-2a66-4a6b-b603-f5600faa7d4c\") " pod="openshift-marketplace/certified-operators-qhxgm" Feb 18 01:09:31 crc kubenswrapper[4847]: I0218 01:09:31.222387 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57b62168-2a66-4a6b-b603-f5600faa7d4c-catalog-content\") pod \"certified-operators-qhxgm\" (UID: \"57b62168-2a66-4a6b-b603-f5600faa7d4c\") " pod="openshift-marketplace/certified-operators-qhxgm" Feb 18 01:09:31 crc kubenswrapper[4847]: I0218 01:09:31.222685 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57b62168-2a66-4a6b-b603-f5600faa7d4c-utilities\") pod \"certified-operators-qhxgm\" (UID: \"57b62168-2a66-4a6b-b603-f5600faa7d4c\") " pod="openshift-marketplace/certified-operators-qhxgm" Feb 18 01:09:31 
crc kubenswrapper[4847]: I0218 01:09:31.324433 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57b62168-2a66-4a6b-b603-f5600faa7d4c-catalog-content\") pod \"certified-operators-qhxgm\" (UID: \"57b62168-2a66-4a6b-b603-f5600faa7d4c\") " pod="openshift-marketplace/certified-operators-qhxgm" Feb 18 01:09:31 crc kubenswrapper[4847]: I0218 01:09:31.324528 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57b62168-2a66-4a6b-b603-f5600faa7d4c-utilities\") pod \"certified-operators-qhxgm\" (UID: \"57b62168-2a66-4a6b-b603-f5600faa7d4c\") " pod="openshift-marketplace/certified-operators-qhxgm" Feb 18 01:09:31 crc kubenswrapper[4847]: I0218 01:09:31.324586 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfb7m\" (UniqueName: \"kubernetes.io/projected/57b62168-2a66-4a6b-b603-f5600faa7d4c-kube-api-access-xfb7m\") pod \"certified-operators-qhxgm\" (UID: \"57b62168-2a66-4a6b-b603-f5600faa7d4c\") " pod="openshift-marketplace/certified-operators-qhxgm" Feb 18 01:09:31 crc kubenswrapper[4847]: I0218 01:09:31.325320 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57b62168-2a66-4a6b-b603-f5600faa7d4c-utilities\") pod \"certified-operators-qhxgm\" (UID: \"57b62168-2a66-4a6b-b603-f5600faa7d4c\") " pod="openshift-marketplace/certified-operators-qhxgm" Feb 18 01:09:31 crc kubenswrapper[4847]: I0218 01:09:31.325666 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57b62168-2a66-4a6b-b603-f5600faa7d4c-catalog-content\") pod \"certified-operators-qhxgm\" (UID: \"57b62168-2a66-4a6b-b603-f5600faa7d4c\") " pod="openshift-marketplace/certified-operators-qhxgm" Feb 18 01:09:31 crc kubenswrapper[4847]: I0218 
01:09:31.356533 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfb7m\" (UniqueName: \"kubernetes.io/projected/57b62168-2a66-4a6b-b603-f5600faa7d4c-kube-api-access-xfb7m\") pod \"certified-operators-qhxgm\" (UID: \"57b62168-2a66-4a6b-b603-f5600faa7d4c\") " pod="openshift-marketplace/certified-operators-qhxgm" Feb 18 01:09:31 crc kubenswrapper[4847]: I0218 01:09:31.426765 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="848ec456-147b-4af3-9da7-6b01f870d16f" path="/var/lib/kubelet/pods/848ec456-147b-4af3-9da7-6b01f870d16f/volumes" Feb 18 01:09:31 crc kubenswrapper[4847]: I0218 01:09:31.463768 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qhxgm" Feb 18 01:09:31 crc kubenswrapper[4847]: I0218 01:09:31.962254 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qhxgm"] Feb 18 01:09:31 crc kubenswrapper[4847]: W0218 01:09:31.962833 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57b62168_2a66_4a6b_b603_f5600faa7d4c.slice/crio-76847a9d70d1b35add398974973e182a986b1fb42b36db2ab9fbbf2e13fe6291 WatchSource:0}: Error finding container 76847a9d70d1b35add398974973e182a986b1fb42b36db2ab9fbbf2e13fe6291: Status 404 returned error can't find the container with id 76847a9d70d1b35add398974973e182a986b1fb42b36db2ab9fbbf2e13fe6291 Feb 18 01:09:32 crc kubenswrapper[4847]: I0218 01:09:32.864115 4847 generic.go:334] "Generic (PLEG): container finished" podID="57b62168-2a66-4a6b-b603-f5600faa7d4c" containerID="5de3ee9d12f54092946527c05e7351e70810d118b7e345a7114f2187c3c9e412" exitCode=0 Feb 18 01:09:32 crc kubenswrapper[4847]: I0218 01:09:32.864248 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhxgm" 
event={"ID":"57b62168-2a66-4a6b-b603-f5600faa7d4c","Type":"ContainerDied","Data":"5de3ee9d12f54092946527c05e7351e70810d118b7e345a7114f2187c3c9e412"} Feb 18 01:09:32 crc kubenswrapper[4847]: I0218 01:09:32.864539 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhxgm" event={"ID":"57b62168-2a66-4a6b-b603-f5600faa7d4c","Type":"ContainerStarted","Data":"76847a9d70d1b35add398974973e182a986b1fb42b36db2ab9fbbf2e13fe6291"} Feb 18 01:09:33 crc kubenswrapper[4847]: I0218 01:09:33.879642 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhxgm" event={"ID":"57b62168-2a66-4a6b-b603-f5600faa7d4c","Type":"ContainerStarted","Data":"689aeb2887f11ffd0097565a300a517467a6a8fccb5452efef7411ceb2057ab5"} Feb 18 01:09:34 crc kubenswrapper[4847]: I0218 01:09:34.902382 4847 generic.go:334] "Generic (PLEG): container finished" podID="57b62168-2a66-4a6b-b603-f5600faa7d4c" containerID="689aeb2887f11ffd0097565a300a517467a6a8fccb5452efef7411ceb2057ab5" exitCode=0 Feb 18 01:09:34 crc kubenswrapper[4847]: I0218 01:09:34.902446 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhxgm" event={"ID":"57b62168-2a66-4a6b-b603-f5600faa7d4c","Type":"ContainerDied","Data":"689aeb2887f11ffd0097565a300a517467a6a8fccb5452efef7411ceb2057ab5"} Feb 18 01:09:35 crc kubenswrapper[4847]: I0218 01:09:35.919878 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhxgm" event={"ID":"57b62168-2a66-4a6b-b603-f5600faa7d4c","Type":"ContainerStarted","Data":"e68f06b2c04fe5d573ea35cbf91cfd1ad9dca5a0f8a323c87c3db08b0272acd2"} Feb 18 01:09:35 crc kubenswrapper[4847]: I0218 01:09:35.966079 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qhxgm" podStartSLOduration=2.258315523 podStartE2EDuration="4.966051485s" podCreationTimestamp="2026-02-18 01:09:31 
+0000 UTC" firstStartedPulling="2026-02-18 01:09:32.867170465 +0000 UTC m=+2646.244521447" lastFinishedPulling="2026-02-18 01:09:35.574906437 +0000 UTC m=+2648.952257409" observedRunningTime="2026-02-18 01:09:35.95024986 +0000 UTC m=+2649.327600812" watchObservedRunningTime="2026-02-18 01:09:35.966051485 +0000 UTC m=+2649.343402467" Feb 18 01:09:41 crc kubenswrapper[4847]: I0218 01:09:41.464247 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qhxgm" Feb 18 01:09:41 crc kubenswrapper[4847]: I0218 01:09:41.465033 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qhxgm" Feb 18 01:09:41 crc kubenswrapper[4847]: I0218 01:09:41.554533 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qhxgm" Feb 18 01:09:42 crc kubenswrapper[4847]: I0218 01:09:42.077916 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qhxgm" Feb 18 01:09:42 crc kubenswrapper[4847]: I0218 01:09:42.154140 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qhxgm"] Feb 18 01:09:43 crc kubenswrapper[4847]: E0218 01:09:43.406640 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:09:44 crc kubenswrapper[4847]: I0218 01:09:44.002369 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qhxgm" podUID="57b62168-2a66-4a6b-b603-f5600faa7d4c" containerName="registry-server" 
containerID="cri-o://e68f06b2c04fe5d573ea35cbf91cfd1ad9dca5a0f8a323c87c3db08b0272acd2" gracePeriod=2 Feb 18 01:09:44 crc kubenswrapper[4847]: E0218 01:09:44.406492 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:09:44 crc kubenswrapper[4847]: I0218 01:09:44.600444 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qhxgm" Feb 18 01:09:44 crc kubenswrapper[4847]: I0218 01:09:44.672373 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfb7m\" (UniqueName: \"kubernetes.io/projected/57b62168-2a66-4a6b-b603-f5600faa7d4c-kube-api-access-xfb7m\") pod \"57b62168-2a66-4a6b-b603-f5600faa7d4c\" (UID: \"57b62168-2a66-4a6b-b603-f5600faa7d4c\") " Feb 18 01:09:44 crc kubenswrapper[4847]: I0218 01:09:44.672760 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57b62168-2a66-4a6b-b603-f5600faa7d4c-utilities\") pod \"57b62168-2a66-4a6b-b603-f5600faa7d4c\" (UID: \"57b62168-2a66-4a6b-b603-f5600faa7d4c\") " Feb 18 01:09:44 crc kubenswrapper[4847]: I0218 01:09:44.672814 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57b62168-2a66-4a6b-b603-f5600faa7d4c-catalog-content\") pod \"57b62168-2a66-4a6b-b603-f5600faa7d4c\" (UID: \"57b62168-2a66-4a6b-b603-f5600faa7d4c\") " Feb 18 01:09:44 crc kubenswrapper[4847]: I0218 01:09:44.673549 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57b62168-2a66-4a6b-b603-f5600faa7d4c-utilities" (OuterVolumeSpecName: 
"utilities") pod "57b62168-2a66-4a6b-b603-f5600faa7d4c" (UID: "57b62168-2a66-4a6b-b603-f5600faa7d4c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:09:44 crc kubenswrapper[4847]: I0218 01:09:44.675411 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57b62168-2a66-4a6b-b603-f5600faa7d4c-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:09:44 crc kubenswrapper[4847]: I0218 01:09:44.682347 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57b62168-2a66-4a6b-b603-f5600faa7d4c-kube-api-access-xfb7m" (OuterVolumeSpecName: "kube-api-access-xfb7m") pod "57b62168-2a66-4a6b-b603-f5600faa7d4c" (UID: "57b62168-2a66-4a6b-b603-f5600faa7d4c"). InnerVolumeSpecName "kube-api-access-xfb7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:09:44 crc kubenswrapper[4847]: I0218 01:09:44.733129 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57b62168-2a66-4a6b-b603-f5600faa7d4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57b62168-2a66-4a6b-b603-f5600faa7d4c" (UID: "57b62168-2a66-4a6b-b603-f5600faa7d4c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:09:44 crc kubenswrapper[4847]: I0218 01:09:44.776986 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57b62168-2a66-4a6b-b603-f5600faa7d4c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:09:44 crc kubenswrapper[4847]: I0218 01:09:44.777020 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfb7m\" (UniqueName: \"kubernetes.io/projected/57b62168-2a66-4a6b-b603-f5600faa7d4c-kube-api-access-xfb7m\") on node \"crc\" DevicePath \"\"" Feb 18 01:09:45 crc kubenswrapper[4847]: I0218 01:09:45.021312 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qhxgm" Feb 18 01:09:45 crc kubenswrapper[4847]: I0218 01:09:45.021341 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhxgm" event={"ID":"57b62168-2a66-4a6b-b603-f5600faa7d4c","Type":"ContainerDied","Data":"e68f06b2c04fe5d573ea35cbf91cfd1ad9dca5a0f8a323c87c3db08b0272acd2"} Feb 18 01:09:45 crc kubenswrapper[4847]: I0218 01:09:45.021429 4847 scope.go:117] "RemoveContainer" containerID="e68f06b2c04fe5d573ea35cbf91cfd1ad9dca5a0f8a323c87c3db08b0272acd2" Feb 18 01:09:45 crc kubenswrapper[4847]: I0218 01:09:45.021278 4847 generic.go:334] "Generic (PLEG): container finished" podID="57b62168-2a66-4a6b-b603-f5600faa7d4c" containerID="e68f06b2c04fe5d573ea35cbf91cfd1ad9dca5a0f8a323c87c3db08b0272acd2" exitCode=0 Feb 18 01:09:45 crc kubenswrapper[4847]: I0218 01:09:45.021730 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qhxgm" event={"ID":"57b62168-2a66-4a6b-b603-f5600faa7d4c","Type":"ContainerDied","Data":"76847a9d70d1b35add398974973e182a986b1fb42b36db2ab9fbbf2e13fe6291"} Feb 18 01:09:45 crc kubenswrapper[4847]: I0218 01:09:45.045549 4847 scope.go:117] "RemoveContainer" 
containerID="689aeb2887f11ffd0097565a300a517467a6a8fccb5452efef7411ceb2057ab5" Feb 18 01:09:45 crc kubenswrapper[4847]: I0218 01:09:45.082105 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qhxgm"] Feb 18 01:09:45 crc kubenswrapper[4847]: I0218 01:09:45.103038 4847 scope.go:117] "RemoveContainer" containerID="5de3ee9d12f54092946527c05e7351e70810d118b7e345a7114f2187c3c9e412" Feb 18 01:09:45 crc kubenswrapper[4847]: I0218 01:09:45.108463 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qhxgm"] Feb 18 01:09:45 crc kubenswrapper[4847]: I0218 01:09:45.160383 4847 scope.go:117] "RemoveContainer" containerID="e68f06b2c04fe5d573ea35cbf91cfd1ad9dca5a0f8a323c87c3db08b0272acd2" Feb 18 01:09:45 crc kubenswrapper[4847]: E0218 01:09:45.160953 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e68f06b2c04fe5d573ea35cbf91cfd1ad9dca5a0f8a323c87c3db08b0272acd2\": container with ID starting with e68f06b2c04fe5d573ea35cbf91cfd1ad9dca5a0f8a323c87c3db08b0272acd2 not found: ID does not exist" containerID="e68f06b2c04fe5d573ea35cbf91cfd1ad9dca5a0f8a323c87c3db08b0272acd2" Feb 18 01:09:45 crc kubenswrapper[4847]: I0218 01:09:45.160987 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e68f06b2c04fe5d573ea35cbf91cfd1ad9dca5a0f8a323c87c3db08b0272acd2"} err="failed to get container status \"e68f06b2c04fe5d573ea35cbf91cfd1ad9dca5a0f8a323c87c3db08b0272acd2\": rpc error: code = NotFound desc = could not find container \"e68f06b2c04fe5d573ea35cbf91cfd1ad9dca5a0f8a323c87c3db08b0272acd2\": container with ID starting with e68f06b2c04fe5d573ea35cbf91cfd1ad9dca5a0f8a323c87c3db08b0272acd2 not found: ID does not exist" Feb 18 01:09:45 crc kubenswrapper[4847]: I0218 01:09:45.161013 4847 scope.go:117] "RemoveContainer" 
containerID="689aeb2887f11ffd0097565a300a517467a6a8fccb5452efef7411ceb2057ab5" Feb 18 01:09:45 crc kubenswrapper[4847]: E0218 01:09:45.161692 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"689aeb2887f11ffd0097565a300a517467a6a8fccb5452efef7411ceb2057ab5\": container with ID starting with 689aeb2887f11ffd0097565a300a517467a6a8fccb5452efef7411ceb2057ab5 not found: ID does not exist" containerID="689aeb2887f11ffd0097565a300a517467a6a8fccb5452efef7411ceb2057ab5" Feb 18 01:09:45 crc kubenswrapper[4847]: I0218 01:09:45.161903 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"689aeb2887f11ffd0097565a300a517467a6a8fccb5452efef7411ceb2057ab5"} err="failed to get container status \"689aeb2887f11ffd0097565a300a517467a6a8fccb5452efef7411ceb2057ab5\": rpc error: code = NotFound desc = could not find container \"689aeb2887f11ffd0097565a300a517467a6a8fccb5452efef7411ceb2057ab5\": container with ID starting with 689aeb2887f11ffd0097565a300a517467a6a8fccb5452efef7411ceb2057ab5 not found: ID does not exist" Feb 18 01:09:45 crc kubenswrapper[4847]: I0218 01:09:45.162060 4847 scope.go:117] "RemoveContainer" containerID="5de3ee9d12f54092946527c05e7351e70810d118b7e345a7114f2187c3c9e412" Feb 18 01:09:45 crc kubenswrapper[4847]: E0218 01:09:45.162657 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5de3ee9d12f54092946527c05e7351e70810d118b7e345a7114f2187c3c9e412\": container with ID starting with 5de3ee9d12f54092946527c05e7351e70810d118b7e345a7114f2187c3c9e412 not found: ID does not exist" containerID="5de3ee9d12f54092946527c05e7351e70810d118b7e345a7114f2187c3c9e412" Feb 18 01:09:45 crc kubenswrapper[4847]: I0218 01:09:45.162709 4847 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5de3ee9d12f54092946527c05e7351e70810d118b7e345a7114f2187c3c9e412"} err="failed to get container status \"5de3ee9d12f54092946527c05e7351e70810d118b7e345a7114f2187c3c9e412\": rpc error: code = NotFound desc = could not find container \"5de3ee9d12f54092946527c05e7351e70810d118b7e345a7114f2187c3c9e412\": container with ID starting with 5de3ee9d12f54092946527c05e7351e70810d118b7e345a7114f2187c3c9e412 not found: ID does not exist" Feb 18 01:09:45 crc kubenswrapper[4847]: I0218 01:09:45.426445 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57b62168-2a66-4a6b-b603-f5600faa7d4c" path="/var/lib/kubelet/pods/57b62168-2a66-4a6b-b603-f5600faa7d4c/volumes" Feb 18 01:09:50 crc kubenswrapper[4847]: I0218 01:09:50.825689 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6x27p"] Feb 18 01:09:50 crc kubenswrapper[4847]: E0218 01:09:50.826972 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57b62168-2a66-4a6b-b603-f5600faa7d4c" containerName="registry-server" Feb 18 01:09:50 crc kubenswrapper[4847]: I0218 01:09:50.826996 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="57b62168-2a66-4a6b-b603-f5600faa7d4c" containerName="registry-server" Feb 18 01:09:50 crc kubenswrapper[4847]: E0218 01:09:50.827016 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57b62168-2a66-4a6b-b603-f5600faa7d4c" containerName="extract-utilities" Feb 18 01:09:50 crc kubenswrapper[4847]: I0218 01:09:50.827026 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="57b62168-2a66-4a6b-b603-f5600faa7d4c" containerName="extract-utilities" Feb 18 01:09:50 crc kubenswrapper[4847]: E0218 01:09:50.827049 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57b62168-2a66-4a6b-b603-f5600faa7d4c" containerName="extract-content" Feb 18 01:09:50 crc kubenswrapper[4847]: I0218 01:09:50.827060 4847 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="57b62168-2a66-4a6b-b603-f5600faa7d4c" containerName="extract-content" Feb 18 01:09:50 crc kubenswrapper[4847]: I0218 01:09:50.827417 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="57b62168-2a66-4a6b-b603-f5600faa7d4c" containerName="registry-server" Feb 18 01:09:50 crc kubenswrapper[4847]: I0218 01:09:50.835060 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6x27p" Feb 18 01:09:50 crc kubenswrapper[4847]: I0218 01:09:50.842441 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6x27p"] Feb 18 01:09:50 crc kubenswrapper[4847]: I0218 01:09:50.922471 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntsg4\" (UniqueName: \"kubernetes.io/projected/2ced1da1-dd7a-4010-a633-e93617d53dc5-kube-api-access-ntsg4\") pod \"redhat-operators-6x27p\" (UID: \"2ced1da1-dd7a-4010-a633-e93617d53dc5\") " pod="openshift-marketplace/redhat-operators-6x27p" Feb 18 01:09:50 crc kubenswrapper[4847]: I0218 01:09:50.922773 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ced1da1-dd7a-4010-a633-e93617d53dc5-utilities\") pod \"redhat-operators-6x27p\" (UID: \"2ced1da1-dd7a-4010-a633-e93617d53dc5\") " pod="openshift-marketplace/redhat-operators-6x27p" Feb 18 01:09:50 crc kubenswrapper[4847]: I0218 01:09:50.922815 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ced1da1-dd7a-4010-a633-e93617d53dc5-catalog-content\") pod \"redhat-operators-6x27p\" (UID: \"2ced1da1-dd7a-4010-a633-e93617d53dc5\") " pod="openshift-marketplace/redhat-operators-6x27p" Feb 18 01:09:51 crc kubenswrapper[4847]: I0218 01:09:51.024751 4847 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-ntsg4\" (UniqueName: \"kubernetes.io/projected/2ced1da1-dd7a-4010-a633-e93617d53dc5-kube-api-access-ntsg4\") pod \"redhat-operators-6x27p\" (UID: \"2ced1da1-dd7a-4010-a633-e93617d53dc5\") " pod="openshift-marketplace/redhat-operators-6x27p" Feb 18 01:09:51 crc kubenswrapper[4847]: I0218 01:09:51.025103 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ced1da1-dd7a-4010-a633-e93617d53dc5-utilities\") pod \"redhat-operators-6x27p\" (UID: \"2ced1da1-dd7a-4010-a633-e93617d53dc5\") " pod="openshift-marketplace/redhat-operators-6x27p" Feb 18 01:09:51 crc kubenswrapper[4847]: I0218 01:09:51.025128 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ced1da1-dd7a-4010-a633-e93617d53dc5-catalog-content\") pod \"redhat-operators-6x27p\" (UID: \"2ced1da1-dd7a-4010-a633-e93617d53dc5\") " pod="openshift-marketplace/redhat-operators-6x27p" Feb 18 01:09:51 crc kubenswrapper[4847]: I0218 01:09:51.025507 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ced1da1-dd7a-4010-a633-e93617d53dc5-catalog-content\") pod \"redhat-operators-6x27p\" (UID: \"2ced1da1-dd7a-4010-a633-e93617d53dc5\") " pod="openshift-marketplace/redhat-operators-6x27p" Feb 18 01:09:51 crc kubenswrapper[4847]: I0218 01:09:51.025620 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ced1da1-dd7a-4010-a633-e93617d53dc5-utilities\") pod \"redhat-operators-6x27p\" (UID: \"2ced1da1-dd7a-4010-a633-e93617d53dc5\") " pod="openshift-marketplace/redhat-operators-6x27p" Feb 18 01:09:51 crc kubenswrapper[4847]: I0218 01:09:51.046252 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntsg4\" (UniqueName: 
\"kubernetes.io/projected/2ced1da1-dd7a-4010-a633-e93617d53dc5-kube-api-access-ntsg4\") pod \"redhat-operators-6x27p\" (UID: \"2ced1da1-dd7a-4010-a633-e93617d53dc5\") " pod="openshift-marketplace/redhat-operators-6x27p" Feb 18 01:09:51 crc kubenswrapper[4847]: I0218 01:09:51.184431 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6x27p" Feb 18 01:09:51 crc kubenswrapper[4847]: I0218 01:09:51.692971 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6x27p"] Feb 18 01:09:52 crc kubenswrapper[4847]: I0218 01:09:52.108913 4847 generic.go:334] "Generic (PLEG): container finished" podID="2ced1da1-dd7a-4010-a633-e93617d53dc5" containerID="0216e7a7e0d3ceb599fb4deff12845282c97e4c798183602f50426f1175090e4" exitCode=0 Feb 18 01:09:52 crc kubenswrapper[4847]: I0218 01:09:52.108976 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6x27p" event={"ID":"2ced1da1-dd7a-4010-a633-e93617d53dc5","Type":"ContainerDied","Data":"0216e7a7e0d3ceb599fb4deff12845282c97e4c798183602f50426f1175090e4"} Feb 18 01:09:52 crc kubenswrapper[4847]: I0218 01:09:52.109142 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6x27p" event={"ID":"2ced1da1-dd7a-4010-a633-e93617d53dc5","Type":"ContainerStarted","Data":"23eb8d55b35340c41ba2a4a73af6910d3e68f0d7daec91c0cb63a6f8b5d47689"} Feb 18 01:09:53 crc kubenswrapper[4847]: I0218 01:09:53.127960 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6x27p" event={"ID":"2ced1da1-dd7a-4010-a633-e93617d53dc5","Type":"ContainerStarted","Data":"fdcbf5e8ce557f4fe114105dc3d1ec6d8e63f968b547ceef27f38d5843a06104"} Feb 18 01:09:53 crc kubenswrapper[4847]: I0218 01:09:53.492316 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:09:53 crc kubenswrapper[4847]: I0218 01:09:53.492417 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:09:56 crc kubenswrapper[4847]: I0218 01:09:56.184500 4847 generic.go:334] "Generic (PLEG): container finished" podID="2ced1da1-dd7a-4010-a633-e93617d53dc5" containerID="fdcbf5e8ce557f4fe114105dc3d1ec6d8e63f968b547ceef27f38d5843a06104" exitCode=0 Feb 18 01:09:56 crc kubenswrapper[4847]: I0218 01:09:56.184659 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6x27p" event={"ID":"2ced1da1-dd7a-4010-a633-e93617d53dc5","Type":"ContainerDied","Data":"fdcbf5e8ce557f4fe114105dc3d1ec6d8e63f968b547ceef27f38d5843a06104"} Feb 18 01:09:57 crc kubenswrapper[4847]: I0218 01:09:57.197860 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6x27p" event={"ID":"2ced1da1-dd7a-4010-a633-e93617d53dc5","Type":"ContainerStarted","Data":"da42d08b7a8dba7a7241f6240068c5d4d02e92d528354ec142f1ef35dcaae2d4"} Feb 18 01:09:58 crc kubenswrapper[4847]: E0218 01:09:58.406670 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:09:58 crc kubenswrapper[4847]: E0218 01:09:58.406836 4847 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:10:01 crc kubenswrapper[4847]: I0218 01:10:01.185516 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6x27p" Feb 18 01:10:01 crc kubenswrapper[4847]: I0218 01:10:01.187363 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6x27p" Feb 18 01:10:02 crc kubenswrapper[4847]: I0218 01:10:02.265288 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6x27p" podUID="2ced1da1-dd7a-4010-a633-e93617d53dc5" containerName="registry-server" probeResult="failure" output=< Feb 18 01:10:02 crc kubenswrapper[4847]: timeout: failed to connect service ":50051" within 1s Feb 18 01:10:02 crc kubenswrapper[4847]: > Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.056464 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6x27p" podStartSLOduration=13.549473361 podStartE2EDuration="18.056427996s" podCreationTimestamp="2026-02-18 01:09:50 +0000 UTC" firstStartedPulling="2026-02-18 01:09:52.112019766 +0000 UTC m=+2665.489370708" lastFinishedPulling="2026-02-18 01:09:56.618974351 +0000 UTC m=+2669.996325343" observedRunningTime="2026-02-18 01:09:57.226939753 +0000 UTC m=+2670.604290745" watchObservedRunningTime="2026-02-18 01:10:08.056427996 +0000 UTC m=+2681.433778978" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.060858 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4"] Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.063361 4847 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.066751 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.067193 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.067404 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.067786 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xzl9l" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.073050 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.076929 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4"] Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.158421 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.158927 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrpxc\" (UniqueName: \"kubernetes.io/projected/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-kube-api-access-rrpxc\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.158994 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.159017 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.159257 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.159350 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.159432 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.261760 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.261889 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrpxc\" (UniqueName: \"kubernetes.io/projected/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-kube-api-access-rrpxc\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.261954 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.261983 4847 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.262047 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.262091 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.262123 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.269765 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-inventory\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.270037 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.270790 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.274791 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.277314 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.281392 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrpxc\" (UniqueName: \"kubernetes.io/projected/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-kube-api-access-rrpxc\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.281996 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:10:08 crc kubenswrapper[4847]: I0218 01:10:08.404224 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:10:09 crc kubenswrapper[4847]: I0218 01:10:09.005047 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4"] Feb 18 01:10:09 crc kubenswrapper[4847]: I0218 01:10:09.366901 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" event={"ID":"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0","Type":"ContainerStarted","Data":"217a7e581941ec492be056a7086b5a49a3b067b4758e976239d27296209a8b53"} Feb 18 01:10:10 crc kubenswrapper[4847]: I0218 01:10:10.379104 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" event={"ID":"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0","Type":"ContainerStarted","Data":"c04895ae00bd34c0741998d0029fe4823724ead59cee8c027510281b55827d58"} Feb 18 01:10:10 crc kubenswrapper[4847]: I0218 01:10:10.411459 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" podStartSLOduration=1.921048637 podStartE2EDuration="2.411440118s" podCreationTimestamp="2026-02-18 01:10:08 +0000 UTC" firstStartedPulling="2026-02-18 01:10:09.005701491 +0000 UTC m=+2682.383052443" lastFinishedPulling="2026-02-18 01:10:09.496092942 +0000 UTC m=+2682.873443924" observedRunningTime="2026-02-18 01:10:10.40579655 +0000 UTC m=+2683.783147512" watchObservedRunningTime="2026-02-18 01:10:10.411440118 +0000 UTC m=+2683.788791070" Feb 18 01:10:11 crc kubenswrapper[4847]: I0218 01:10:11.251884 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6x27p" Feb 18 01:10:11 crc kubenswrapper[4847]: I0218 01:10:11.320642 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6x27p" Feb 18 01:10:11 crc 
kubenswrapper[4847]: E0218 01:10:11.413802 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:10:11 crc kubenswrapper[4847]: I0218 01:10:11.490491 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6x27p"] Feb 18 01:10:12 crc kubenswrapper[4847]: I0218 01:10:12.400141 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6x27p" podUID="2ced1da1-dd7a-4010-a633-e93617d53dc5" containerName="registry-server" containerID="cri-o://da42d08b7a8dba7a7241f6240068c5d4d02e92d528354ec142f1ef35dcaae2d4" gracePeriod=2 Feb 18 01:10:12 crc kubenswrapper[4847]: I0218 01:10:12.922642 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6x27p" Feb 18 01:10:12 crc kubenswrapper[4847]: I0218 01:10:12.973074 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ced1da1-dd7a-4010-a633-e93617d53dc5-utilities\") pod \"2ced1da1-dd7a-4010-a633-e93617d53dc5\" (UID: \"2ced1da1-dd7a-4010-a633-e93617d53dc5\") " Feb 18 01:10:12 crc kubenswrapper[4847]: I0218 01:10:12.973377 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ced1da1-dd7a-4010-a633-e93617d53dc5-catalog-content\") pod \"2ced1da1-dd7a-4010-a633-e93617d53dc5\" (UID: \"2ced1da1-dd7a-4010-a633-e93617d53dc5\") " Feb 18 01:10:12 crc kubenswrapper[4847]: I0218 01:10:12.973436 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntsg4\" (UniqueName: \"kubernetes.io/projected/2ced1da1-dd7a-4010-a633-e93617d53dc5-kube-api-access-ntsg4\") pod \"2ced1da1-dd7a-4010-a633-e93617d53dc5\" (UID: \"2ced1da1-dd7a-4010-a633-e93617d53dc5\") " Feb 18 01:10:12 crc kubenswrapper[4847]: I0218 01:10:12.976396 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ced1da1-dd7a-4010-a633-e93617d53dc5-utilities" (OuterVolumeSpecName: "utilities") pod "2ced1da1-dd7a-4010-a633-e93617d53dc5" (UID: "2ced1da1-dd7a-4010-a633-e93617d53dc5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:10:12 crc kubenswrapper[4847]: I0218 01:10:12.981525 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ced1da1-dd7a-4010-a633-e93617d53dc5-kube-api-access-ntsg4" (OuterVolumeSpecName: "kube-api-access-ntsg4") pod "2ced1da1-dd7a-4010-a633-e93617d53dc5" (UID: "2ced1da1-dd7a-4010-a633-e93617d53dc5"). InnerVolumeSpecName "kube-api-access-ntsg4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:10:13 crc kubenswrapper[4847]: I0218 01:10:13.076163 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ced1da1-dd7a-4010-a633-e93617d53dc5-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:10:13 crc kubenswrapper[4847]: I0218 01:10:13.076196 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntsg4\" (UniqueName: \"kubernetes.io/projected/2ced1da1-dd7a-4010-a633-e93617d53dc5-kube-api-access-ntsg4\") on node \"crc\" DevicePath \"\"" Feb 18 01:10:13 crc kubenswrapper[4847]: I0218 01:10:13.106683 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ced1da1-dd7a-4010-a633-e93617d53dc5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ced1da1-dd7a-4010-a633-e93617d53dc5" (UID: "2ced1da1-dd7a-4010-a633-e93617d53dc5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:10:13 crc kubenswrapper[4847]: I0218 01:10:13.178206 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ced1da1-dd7a-4010-a633-e93617d53dc5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:10:13 crc kubenswrapper[4847]: E0218 01:10:13.406778 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:10:13 crc kubenswrapper[4847]: I0218 01:10:13.412345 4847 generic.go:334] "Generic (PLEG): container finished" podID="2ced1da1-dd7a-4010-a633-e93617d53dc5" containerID="da42d08b7a8dba7a7241f6240068c5d4d02e92d528354ec142f1ef35dcaae2d4" exitCode=0 Feb 18 01:10:13 crc 
kubenswrapper[4847]: I0218 01:10:13.412453 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6x27p" Feb 18 01:10:13 crc kubenswrapper[4847]: I0218 01:10:13.457093 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6x27p" event={"ID":"2ced1da1-dd7a-4010-a633-e93617d53dc5","Type":"ContainerDied","Data":"da42d08b7a8dba7a7241f6240068c5d4d02e92d528354ec142f1ef35dcaae2d4"} Feb 18 01:10:13 crc kubenswrapper[4847]: I0218 01:10:13.457142 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6x27p" event={"ID":"2ced1da1-dd7a-4010-a633-e93617d53dc5","Type":"ContainerDied","Data":"23eb8d55b35340c41ba2a4a73af6910d3e68f0d7daec91c0cb63a6f8b5d47689"} Feb 18 01:10:13 crc kubenswrapper[4847]: I0218 01:10:13.457176 4847 scope.go:117] "RemoveContainer" containerID="da42d08b7a8dba7a7241f6240068c5d4d02e92d528354ec142f1ef35dcaae2d4" Feb 18 01:10:13 crc kubenswrapper[4847]: I0218 01:10:13.495013 4847 scope.go:117] "RemoveContainer" containerID="fdcbf5e8ce557f4fe114105dc3d1ec6d8e63f968b547ceef27f38d5843a06104" Feb 18 01:10:13 crc kubenswrapper[4847]: I0218 01:10:13.541012 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6x27p"] Feb 18 01:10:13 crc kubenswrapper[4847]: I0218 01:10:13.553841 4847 scope.go:117] "RemoveContainer" containerID="0216e7a7e0d3ceb599fb4deff12845282c97e4c798183602f50426f1175090e4" Feb 18 01:10:13 crc kubenswrapper[4847]: I0218 01:10:13.556915 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6x27p"] Feb 18 01:10:13 crc kubenswrapper[4847]: I0218 01:10:13.596759 4847 scope.go:117] "RemoveContainer" containerID="da42d08b7a8dba7a7241f6240068c5d4d02e92d528354ec142f1ef35dcaae2d4" Feb 18 01:10:13 crc kubenswrapper[4847]: E0218 01:10:13.597325 4847 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"da42d08b7a8dba7a7241f6240068c5d4d02e92d528354ec142f1ef35dcaae2d4\": container with ID starting with da42d08b7a8dba7a7241f6240068c5d4d02e92d528354ec142f1ef35dcaae2d4 not found: ID does not exist" containerID="da42d08b7a8dba7a7241f6240068c5d4d02e92d528354ec142f1ef35dcaae2d4" Feb 18 01:10:13 crc kubenswrapper[4847]: I0218 01:10:13.597416 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da42d08b7a8dba7a7241f6240068c5d4d02e92d528354ec142f1ef35dcaae2d4"} err="failed to get container status \"da42d08b7a8dba7a7241f6240068c5d4d02e92d528354ec142f1ef35dcaae2d4\": rpc error: code = NotFound desc = could not find container \"da42d08b7a8dba7a7241f6240068c5d4d02e92d528354ec142f1ef35dcaae2d4\": container with ID starting with da42d08b7a8dba7a7241f6240068c5d4d02e92d528354ec142f1ef35dcaae2d4 not found: ID does not exist" Feb 18 01:10:13 crc kubenswrapper[4847]: I0218 01:10:13.597521 4847 scope.go:117] "RemoveContainer" containerID="fdcbf5e8ce557f4fe114105dc3d1ec6d8e63f968b547ceef27f38d5843a06104" Feb 18 01:10:13 crc kubenswrapper[4847]: E0218 01:10:13.597937 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdcbf5e8ce557f4fe114105dc3d1ec6d8e63f968b547ceef27f38d5843a06104\": container with ID starting with fdcbf5e8ce557f4fe114105dc3d1ec6d8e63f968b547ceef27f38d5843a06104 not found: ID does not exist" containerID="fdcbf5e8ce557f4fe114105dc3d1ec6d8e63f968b547ceef27f38d5843a06104" Feb 18 01:10:13 crc kubenswrapper[4847]: I0218 01:10:13.598006 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdcbf5e8ce557f4fe114105dc3d1ec6d8e63f968b547ceef27f38d5843a06104"} err="failed to get container status \"fdcbf5e8ce557f4fe114105dc3d1ec6d8e63f968b547ceef27f38d5843a06104\": rpc error: code = NotFound desc = could not find container 
\"fdcbf5e8ce557f4fe114105dc3d1ec6d8e63f968b547ceef27f38d5843a06104\": container with ID starting with fdcbf5e8ce557f4fe114105dc3d1ec6d8e63f968b547ceef27f38d5843a06104 not found: ID does not exist" Feb 18 01:10:13 crc kubenswrapper[4847]: I0218 01:10:13.598041 4847 scope.go:117] "RemoveContainer" containerID="0216e7a7e0d3ceb599fb4deff12845282c97e4c798183602f50426f1175090e4" Feb 18 01:10:13 crc kubenswrapper[4847]: E0218 01:10:13.598369 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0216e7a7e0d3ceb599fb4deff12845282c97e4c798183602f50426f1175090e4\": container with ID starting with 0216e7a7e0d3ceb599fb4deff12845282c97e4c798183602f50426f1175090e4 not found: ID does not exist" containerID="0216e7a7e0d3ceb599fb4deff12845282c97e4c798183602f50426f1175090e4" Feb 18 01:10:13 crc kubenswrapper[4847]: I0218 01:10:13.598411 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0216e7a7e0d3ceb599fb4deff12845282c97e4c798183602f50426f1175090e4"} err="failed to get container status \"0216e7a7e0d3ceb599fb4deff12845282c97e4c798183602f50426f1175090e4\": rpc error: code = NotFound desc = could not find container \"0216e7a7e0d3ceb599fb4deff12845282c97e4c798183602f50426f1175090e4\": container with ID starting with 0216e7a7e0d3ceb599fb4deff12845282c97e4c798183602f50426f1175090e4 not found: ID does not exist" Feb 18 01:10:15 crc kubenswrapper[4847]: I0218 01:10:15.426373 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ced1da1-dd7a-4010-a633-e93617d53dc5" path="/var/lib/kubelet/pods/2ced1da1-dd7a-4010-a633-e93617d53dc5/volumes" Feb 18 01:10:23 crc kubenswrapper[4847]: I0218 01:10:23.492281 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Feb 18 01:10:23 crc kubenswrapper[4847]: I0218 01:10:23.492971 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:10:23 crc kubenswrapper[4847]: I0218 01:10:23.493039 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 01:10:23 crc kubenswrapper[4847]: I0218 01:10:23.494200 4847 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2511ba7f8c9a7fd7741c7c7720bc448ebe3e64f3219d62946cd69e0f35a07fe2"} pod="openshift-machine-config-operator/machine-config-daemon-xsj47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 01:10:23 crc kubenswrapper[4847]: I0218 01:10:23.494295 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" containerID="cri-o://2511ba7f8c9a7fd7741c7c7720bc448ebe3e64f3219d62946cd69e0f35a07fe2" gracePeriod=600 Feb 18 01:10:24 crc kubenswrapper[4847]: E0218 01:10:24.408091 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:10:24 crc kubenswrapper[4847]: I0218 01:10:24.579656 4847 generic.go:334] "Generic (PLEG): container 
finished" podID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerID="2511ba7f8c9a7fd7741c7c7720bc448ebe3e64f3219d62946cd69e0f35a07fe2" exitCode=0 Feb 18 01:10:24 crc kubenswrapper[4847]: I0218 01:10:24.579715 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerDied","Data":"2511ba7f8c9a7fd7741c7c7720bc448ebe3e64f3219d62946cd69e0f35a07fe2"} Feb 18 01:10:24 crc kubenswrapper[4847]: I0218 01:10:24.579755 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerStarted","Data":"c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073"} Feb 18 01:10:24 crc kubenswrapper[4847]: I0218 01:10:24.579779 4847 scope.go:117] "RemoveContainer" containerID="f53740fe893ddfb4ea8f2985c40e500fb4ac19b2e4206d7df93be8e178626970" Feb 18 01:10:27 crc kubenswrapper[4847]: E0218 01:10:27.426733 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:10:39 crc kubenswrapper[4847]: E0218 01:10:39.408573 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:10:39 crc kubenswrapper[4847]: E0218 01:10:39.409213 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:10:51 crc kubenswrapper[4847]: E0218 01:10:51.406426 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:10:52 crc kubenswrapper[4847]: E0218 01:10:52.405674 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:11:02 crc kubenswrapper[4847]: E0218 01:11:02.406847 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:11:03 crc kubenswrapper[4847]: I0218 01:11:03.049325 4847 generic.go:334] "Generic (PLEG): container finished" podID="9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0" containerID="c04895ae00bd34c0741998d0029fe4823724ead59cee8c027510281b55827d58" exitCode=2 Feb 18 01:11:03 crc kubenswrapper[4847]: I0218 01:11:03.049459 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" event={"ID":"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0","Type":"ContainerDied","Data":"c04895ae00bd34c0741998d0029fe4823724ead59cee8c027510281b55827d58"} Feb 18 
01:11:04 crc kubenswrapper[4847]: I0218 01:11:04.639457 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:11:04 crc kubenswrapper[4847]: I0218 01:11:04.826168 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrpxc\" (UniqueName: \"kubernetes.io/projected/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-kube-api-access-rrpxc\") pod \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " Feb 18 01:11:04 crc kubenswrapper[4847]: I0218 01:11:04.826288 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-inventory\") pod \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " Feb 18 01:11:04 crc kubenswrapper[4847]: I0218 01:11:04.826338 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ssh-key-openstack-edpm-ipam\") pod \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " Feb 18 01:11:04 crc kubenswrapper[4847]: I0218 01:11:04.826446 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ceilometer-compute-config-data-2\") pod \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " Feb 18 01:11:04 crc kubenswrapper[4847]: I0218 01:11:04.826520 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ceilometer-compute-config-data-1\") pod 
\"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " Feb 18 01:11:04 crc kubenswrapper[4847]: I0218 01:11:04.826644 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ceilometer-compute-config-data-0\") pod \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " Feb 18 01:11:04 crc kubenswrapper[4847]: I0218 01:11:04.826920 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-telemetry-combined-ca-bundle\") pod \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\" (UID: \"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0\") " Feb 18 01:11:04 crc kubenswrapper[4847]: I0218 01:11:04.833220 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0" (UID: "9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:11:04 crc kubenswrapper[4847]: I0218 01:11:04.834025 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-kube-api-access-rrpxc" (OuterVolumeSpecName: "kube-api-access-rrpxc") pod "9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0" (UID: "9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0"). InnerVolumeSpecName "kube-api-access-rrpxc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:11:04 crc kubenswrapper[4847]: I0218 01:11:04.868074 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0" (UID: "9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:11:04 crc kubenswrapper[4847]: I0218 01:11:04.869348 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0" (UID: "9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:11:04 crc kubenswrapper[4847]: I0218 01:11:04.870827 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-inventory" (OuterVolumeSpecName: "inventory") pod "9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0" (UID: "9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:11:04 crc kubenswrapper[4847]: I0218 01:11:04.876135 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0" (UID: "9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:11:04 crc kubenswrapper[4847]: I0218 01:11:04.885165 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0" (UID: "9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:11:04 crc kubenswrapper[4847]: I0218 01:11:04.930060 4847 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 01:11:04 crc kubenswrapper[4847]: I0218 01:11:04.930104 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrpxc\" (UniqueName: \"kubernetes.io/projected/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-kube-api-access-rrpxc\") on node \"crc\" DevicePath \"\"" Feb 18 01:11:04 crc kubenswrapper[4847]: I0218 01:11:04.930120 4847 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 01:11:04 crc kubenswrapper[4847]: I0218 01:11:04.930133 4847 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 01:11:04 crc kubenswrapper[4847]: I0218 01:11:04.930146 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 18 01:11:04 crc kubenswrapper[4847]: I0218 
01:11:04.930159 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 18 01:11:04 crc kubenswrapper[4847]: I0218 01:11:04.930171 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 18 01:11:05 crc kubenswrapper[4847]: I0218 01:11:05.072504 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" event={"ID":"9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0","Type":"ContainerDied","Data":"217a7e581941ec492be056a7086b5a49a3b067b4758e976239d27296209a8b53"} Feb 18 01:11:05 crc kubenswrapper[4847]: I0218 01:11:05.072547 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="217a7e581941ec492be056a7086b5a49a3b067b4758e976239d27296209a8b53" Feb 18 01:11:05 crc kubenswrapper[4847]: I0218 01:11:05.072616 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4" Feb 18 01:11:06 crc kubenswrapper[4847]: E0218 01:11:06.407951 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:11:14 crc kubenswrapper[4847]: E0218 01:11:14.407182 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:11:17 crc kubenswrapper[4847]: E0218 01:11:17.418009 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:11:27 crc kubenswrapper[4847]: E0218 01:11:27.419136 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:11:31 crc kubenswrapper[4847]: E0218 01:11:31.407781 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:11:41 crc kubenswrapper[4847]: E0218 01:11:41.407099 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:11:45 crc kubenswrapper[4847]: E0218 01:11:45.407444 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:11:55 crc kubenswrapper[4847]: E0218 01:11:55.407671 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:11:58 crc kubenswrapper[4847]: E0218 01:11:58.406769 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:12:06 crc kubenswrapper[4847]: E0218 01:12:06.407702 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:12:10 crc kubenswrapper[4847]: E0218 01:12:10.407736 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:12:17 crc kubenswrapper[4847]: E0218 01:12:17.419177 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:12:21 crc kubenswrapper[4847]: E0218 01:12:21.406472 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.053095 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz"] Feb 18 01:12:22 crc kubenswrapper[4847]: E0218 01:12:22.054219 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ced1da1-dd7a-4010-a633-e93617d53dc5" containerName="extract-utilities" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.054248 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ced1da1-dd7a-4010-a633-e93617d53dc5" containerName="extract-utilities" Feb 18 
01:12:22 crc kubenswrapper[4847]: E0218 01:12:22.054285 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.054305 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 01:12:22 crc kubenswrapper[4847]: E0218 01:12:22.054332 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ced1da1-dd7a-4010-a633-e93617d53dc5" containerName="extract-content" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.054347 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ced1da1-dd7a-4010-a633-e93617d53dc5" containerName="extract-content" Feb 18 01:12:22 crc kubenswrapper[4847]: E0218 01:12:22.054384 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ced1da1-dd7a-4010-a633-e93617d53dc5" containerName="registry-server" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.054400 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ced1da1-dd7a-4010-a633-e93617d53dc5" containerName="registry-server" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.054795 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.054844 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ced1da1-dd7a-4010-a633-e93617d53dc5" containerName="registry-server" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.056069 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.060946 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.061067 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.061258 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.061755 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xzl9l" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.073074 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.073804 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz"] Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.134873 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h68tz\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.134944 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ceilometer-compute-config-data-2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-h68tz\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.134976 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h68tz\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.135063 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98n8l\" (UniqueName: \"kubernetes.io/projected/e6120fb7-f119-4597-86d5-8c75dcffac32-kube-api-access-98n8l\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h68tz\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.135176 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h68tz\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.135219 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h68tz\" (UID: 
\"e6120fb7-f119-4597-86d5-8c75dcffac32\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.135251 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h68tz\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.237785 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h68tz\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.238222 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h68tz\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.238431 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h68tz\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.238678 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h68tz\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.238872 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h68tz\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.239030 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h68tz\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.239800 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98n8l\" (UniqueName: \"kubernetes.io/projected/e6120fb7-f119-4597-86d5-8c75dcffac32-kube-api-access-98n8l\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h68tz\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.245960 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h68tz\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.246175 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h68tz\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.247309 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h68tz\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.247754 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h68tz\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.248162 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h68tz\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.251598 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h68tz\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.264779 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98n8l\" (UniqueName: \"kubernetes.io/projected/e6120fb7-f119-4597-86d5-8c75dcffac32-kube-api-access-98n8l\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-h68tz\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:12:22 crc kubenswrapper[4847]: I0218 01:12:22.397873 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:12:23 crc kubenswrapper[4847]: I0218 01:12:23.056327 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz"] Feb 18 01:12:23 crc kubenswrapper[4847]: I0218 01:12:23.491408 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:12:23 crc kubenswrapper[4847]: I0218 01:12:23.491972 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:12:24 crc kubenswrapper[4847]: I0218 01:12:24.035254 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" event={"ID":"e6120fb7-f119-4597-86d5-8c75dcffac32","Type":"ContainerStarted","Data":"215cd27f560d11fa27342ea616672f43e8913536ba20a3b1b2c277391ba44880"} Feb 18 01:12:24 crc kubenswrapper[4847]: I0218 01:12:24.035554 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" event={"ID":"e6120fb7-f119-4597-86d5-8c75dcffac32","Type":"ContainerStarted","Data":"2c2cbbd4040d2d2ec9db526db5bb99176b1dbc4f4b1ebd52bba27190cad18aa8"} Feb 18 01:12:24 crc kubenswrapper[4847]: I0218 01:12:24.062625 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" podStartSLOduration=1.637713985 podStartE2EDuration="2.062578552s" 
podCreationTimestamp="2026-02-18 01:12:22 +0000 UTC" firstStartedPulling="2026-02-18 01:12:23.037438548 +0000 UTC m=+2816.414789530" lastFinishedPulling="2026-02-18 01:12:23.462303145 +0000 UTC m=+2816.839654097" observedRunningTime="2026-02-18 01:12:24.055462725 +0000 UTC m=+2817.432813687" watchObservedRunningTime="2026-02-18 01:12:24.062578552 +0000 UTC m=+2817.439929514" Feb 18 01:12:28 crc kubenswrapper[4847]: E0218 01:12:28.407080 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:12:32 crc kubenswrapper[4847]: E0218 01:12:32.409098 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:12:39 crc kubenswrapper[4847]: I0218 01:12:39.406682 4847 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 01:12:39 crc kubenswrapper[4847]: E0218 01:12:39.520289 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:12:39 crc kubenswrapper[4847]: E0218 01:12:39.520663 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:12:39 crc kubenswrapper[4847]: E0218 01:12:39.520784 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjwt
2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-k4t5r_openstack(452f74c1-fa5f-464b-9943-a4a1c2d5c48a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:12:39 crc kubenswrapper[4847]: E0218 01:12:39.521942 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:12:45 crc kubenswrapper[4847]: E0218 01:12:45.408283 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:12:50 crc kubenswrapper[4847]: E0218 01:12:50.406452 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:12:53 crc kubenswrapper[4847]: I0218 01:12:53.491322 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:12:53 crc kubenswrapper[4847]: I0218 01:12:53.492739 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:12:59 crc kubenswrapper[4847]: E0218 01:12:59.540965 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:12:59 crc kubenswrapper[4847]: E0218 01:12:59.541492 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:12:59 crc kubenswrapper[4847]: E0218 01:12:59.541681 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9fhbdh67dh64h568hch65dhb8h67chf7h66dhfh585h565h85h554h68ch8bh56ch677h57bhb5h544h575hd7h5f8hc7h66dh669h57bh589h56cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:12:59 crc kubenswrapper[4847]: E0218 01:12:59.543032 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:13:05 crc kubenswrapper[4847]: E0218 01:13:05.407966 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:13:14 crc kubenswrapper[4847]: E0218 01:13:14.408496 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:13:15 crc kubenswrapper[4847]: I0218 01:13:15.655765 4847 generic.go:334] "Generic (PLEG): container finished" podID="e6120fb7-f119-4597-86d5-8c75dcffac32" containerID="215cd27f560d11fa27342ea616672f43e8913536ba20a3b1b2c277391ba44880" exitCode=2 Feb 18 01:13:15 crc kubenswrapper[4847]: I0218 01:13:15.655849 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" event={"ID":"e6120fb7-f119-4597-86d5-8c75dcffac32","Type":"ContainerDied","Data":"215cd27f560d11fa27342ea616672f43e8913536ba20a3b1b2c277391ba44880"} Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.152816 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.272924 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-telemetry-combined-ca-bundle\") pod \"e6120fb7-f119-4597-86d5-8c75dcffac32\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.273113 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ceilometer-compute-config-data-1\") pod \"e6120fb7-f119-4597-86d5-8c75dcffac32\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.273149 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ceilometer-compute-config-data-0\") pod \"e6120fb7-f119-4597-86d5-8c75dcffac32\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.273218 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ssh-key-openstack-edpm-ipam\") pod \"e6120fb7-f119-4597-86d5-8c75dcffac32\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.273251 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-inventory\") pod \"e6120fb7-f119-4597-86d5-8c75dcffac32\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " Feb 18 01:13:17 crc 
kubenswrapper[4847]: I0218 01:13:17.273309 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98n8l\" (UniqueName: \"kubernetes.io/projected/e6120fb7-f119-4597-86d5-8c75dcffac32-kube-api-access-98n8l\") pod \"e6120fb7-f119-4597-86d5-8c75dcffac32\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.273363 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ceilometer-compute-config-data-2\") pod \"e6120fb7-f119-4597-86d5-8c75dcffac32\" (UID: \"e6120fb7-f119-4597-86d5-8c75dcffac32\") " Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.279621 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "e6120fb7-f119-4597-86d5-8c75dcffac32" (UID: "e6120fb7-f119-4597-86d5-8c75dcffac32"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.280528 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6120fb7-f119-4597-86d5-8c75dcffac32-kube-api-access-98n8l" (OuterVolumeSpecName: "kube-api-access-98n8l") pod "e6120fb7-f119-4597-86d5-8c75dcffac32" (UID: "e6120fb7-f119-4597-86d5-8c75dcffac32"). InnerVolumeSpecName "kube-api-access-98n8l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.303814 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "e6120fb7-f119-4597-86d5-8c75dcffac32" (UID: "e6120fb7-f119-4597-86d5-8c75dcffac32"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.306007 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "e6120fb7-f119-4597-86d5-8c75dcffac32" (UID: "e6120fb7-f119-4597-86d5-8c75dcffac32"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.311493 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "e6120fb7-f119-4597-86d5-8c75dcffac32" (UID: "e6120fb7-f119-4597-86d5-8c75dcffac32"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.313438 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e6120fb7-f119-4597-86d5-8c75dcffac32" (UID: "e6120fb7-f119-4597-86d5-8c75dcffac32"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.320245 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-inventory" (OuterVolumeSpecName: "inventory") pod "e6120fb7-f119-4597-86d5-8c75dcffac32" (UID: "e6120fb7-f119-4597-86d5-8c75dcffac32"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.375631 4847 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.375662 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98n8l\" (UniqueName: \"kubernetes.io/projected/e6120fb7-f119-4597-86d5-8c75dcffac32-kube-api-access-98n8l\") on node \"crc\" DevicePath \"\"" Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.375674 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.375685 4847 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.375694 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.375702 4847 reconciler_common.go:293] 
"Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.375710 4847 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6120fb7-f119-4597-86d5-8c75dcffac32-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.691233 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" event={"ID":"e6120fb7-f119-4597-86d5-8c75dcffac32","Type":"ContainerDied","Data":"2c2cbbd4040d2d2ec9db526db5bb99176b1dbc4f4b1ebd52bba27190cad18aa8"} Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.691294 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c2cbbd4040d2d2ec9db526db5bb99176b1dbc4f4b1ebd52bba27190cad18aa8" Feb 18 01:13:17 crc kubenswrapper[4847]: I0218 01:13:17.691376 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-h68tz" Feb 18 01:13:19 crc kubenswrapper[4847]: E0218 01:13:19.406274 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:13:23 crc kubenswrapper[4847]: I0218 01:13:23.492207 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:13:23 crc kubenswrapper[4847]: I0218 01:13:23.492913 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:13:23 crc kubenswrapper[4847]: I0218 01:13:23.492972 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 01:13:23 crc kubenswrapper[4847]: I0218 01:13:23.494006 4847 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073"} pod="openshift-machine-config-operator/machine-config-daemon-xsj47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 01:13:23 crc kubenswrapper[4847]: I0218 01:13:23.494073 4847 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" containerID="cri-o://c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" gracePeriod=600 Feb 18 01:13:23 crc kubenswrapper[4847]: E0218 01:13:23.629200 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:13:23 crc kubenswrapper[4847]: I0218 01:13:23.762737 4847 generic.go:334] "Generic (PLEG): container finished" podID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" exitCode=0 Feb 18 01:13:23 crc kubenswrapper[4847]: I0218 01:13:23.762787 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerDied","Data":"c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073"} Feb 18 01:13:23 crc kubenswrapper[4847]: I0218 01:13:23.763112 4847 scope.go:117] "RemoveContainer" containerID="2511ba7f8c9a7fd7741c7c7720bc448ebe3e64f3219d62946cd69e0f35a07fe2" Feb 18 01:13:23 crc kubenswrapper[4847]: I0218 01:13:23.764062 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:13:23 crc kubenswrapper[4847]: E0218 01:13:23.764540 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:13:25 crc kubenswrapper[4847]: I0218 01:13:25.353075 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rgjhx"] Feb 18 01:13:25 crc kubenswrapper[4847]: E0218 01:13:25.353834 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6120fb7-f119-4597-86d5-8c75dcffac32" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 01:13:25 crc kubenswrapper[4847]: I0218 01:13:25.353867 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6120fb7-f119-4597-86d5-8c75dcffac32" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 01:13:25 crc kubenswrapper[4847]: I0218 01:13:25.354329 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6120fb7-f119-4597-86d5-8c75dcffac32" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 01:13:25 crc kubenswrapper[4847]: I0218 01:13:25.357140 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rgjhx" Feb 18 01:13:25 crc kubenswrapper[4847]: I0218 01:13:25.379493 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rgjhx"] Feb 18 01:13:25 crc kubenswrapper[4847]: I0218 01:13:25.472401 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9ace25a-a505-4524-b25c-72dfe2e17b53-catalog-content\") pod \"community-operators-rgjhx\" (UID: \"a9ace25a-a505-4524-b25c-72dfe2e17b53\") " pod="openshift-marketplace/community-operators-rgjhx" Feb 18 01:13:25 crc kubenswrapper[4847]: I0218 01:13:25.472458 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxfj4\" (UniqueName: \"kubernetes.io/projected/a9ace25a-a505-4524-b25c-72dfe2e17b53-kube-api-access-qxfj4\") pod \"community-operators-rgjhx\" (UID: \"a9ace25a-a505-4524-b25c-72dfe2e17b53\") " pod="openshift-marketplace/community-operators-rgjhx" Feb 18 01:13:25 crc kubenswrapper[4847]: I0218 01:13:25.473052 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9ace25a-a505-4524-b25c-72dfe2e17b53-utilities\") pod \"community-operators-rgjhx\" (UID: \"a9ace25a-a505-4524-b25c-72dfe2e17b53\") " pod="openshift-marketplace/community-operators-rgjhx" Feb 18 01:13:25 crc kubenswrapper[4847]: I0218 01:13:25.576054 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9ace25a-a505-4524-b25c-72dfe2e17b53-catalog-content\") pod \"community-operators-rgjhx\" (UID: \"a9ace25a-a505-4524-b25c-72dfe2e17b53\") " pod="openshift-marketplace/community-operators-rgjhx" Feb 18 01:13:25 crc kubenswrapper[4847]: I0218 01:13:25.576152 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qxfj4\" (UniqueName: \"kubernetes.io/projected/a9ace25a-a505-4524-b25c-72dfe2e17b53-kube-api-access-qxfj4\") pod \"community-operators-rgjhx\" (UID: \"a9ace25a-a505-4524-b25c-72dfe2e17b53\") " pod="openshift-marketplace/community-operators-rgjhx" Feb 18 01:13:25 crc kubenswrapper[4847]: I0218 01:13:25.576377 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9ace25a-a505-4524-b25c-72dfe2e17b53-utilities\") pod \"community-operators-rgjhx\" (UID: \"a9ace25a-a505-4524-b25c-72dfe2e17b53\") " pod="openshift-marketplace/community-operators-rgjhx" Feb 18 01:13:25 crc kubenswrapper[4847]: I0218 01:13:25.576855 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9ace25a-a505-4524-b25c-72dfe2e17b53-utilities\") pod \"community-operators-rgjhx\" (UID: \"a9ace25a-a505-4524-b25c-72dfe2e17b53\") " pod="openshift-marketplace/community-operators-rgjhx" Feb 18 01:13:25 crc kubenswrapper[4847]: I0218 01:13:25.577028 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9ace25a-a505-4524-b25c-72dfe2e17b53-catalog-content\") pod \"community-operators-rgjhx\" (UID: \"a9ace25a-a505-4524-b25c-72dfe2e17b53\") " pod="openshift-marketplace/community-operators-rgjhx" Feb 18 01:13:25 crc kubenswrapper[4847]: I0218 01:13:25.598528 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxfj4\" (UniqueName: \"kubernetes.io/projected/a9ace25a-a505-4524-b25c-72dfe2e17b53-kube-api-access-qxfj4\") pod \"community-operators-rgjhx\" (UID: \"a9ace25a-a505-4524-b25c-72dfe2e17b53\") " pod="openshift-marketplace/community-operators-rgjhx" Feb 18 01:13:25 crc kubenswrapper[4847]: I0218 01:13:25.685548 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rgjhx" Feb 18 01:13:26 crc kubenswrapper[4847]: I0218 01:13:26.267474 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rgjhx"] Feb 18 01:13:26 crc kubenswrapper[4847]: W0218 01:13:26.275596 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9ace25a_a505_4524_b25c_72dfe2e17b53.slice/crio-088f0237ee752c3b0e214fb55d41290c1eb649fa93f79d79105c00a48c7097e0 WatchSource:0}: Error finding container 088f0237ee752c3b0e214fb55d41290c1eb649fa93f79d79105c00a48c7097e0: Status 404 returned error can't find the container with id 088f0237ee752c3b0e214fb55d41290c1eb649fa93f79d79105c00a48c7097e0 Feb 18 01:13:26 crc kubenswrapper[4847]: E0218 01:13:26.407636 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:13:26 crc kubenswrapper[4847]: I0218 01:13:26.818747 4847 generic.go:334] "Generic (PLEG): container finished" podID="a9ace25a-a505-4524-b25c-72dfe2e17b53" containerID="db8812fee4d6aae696eb5f82f6017786ca671e8341ea3df734e455aa5e3a5b18" exitCode=0 Feb 18 01:13:26 crc kubenswrapper[4847]: I0218 01:13:26.819094 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgjhx" event={"ID":"a9ace25a-a505-4524-b25c-72dfe2e17b53","Type":"ContainerDied","Data":"db8812fee4d6aae696eb5f82f6017786ca671e8341ea3df734e455aa5e3a5b18"} Feb 18 01:13:26 crc kubenswrapper[4847]: I0218 01:13:26.819133 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgjhx" 
event={"ID":"a9ace25a-a505-4524-b25c-72dfe2e17b53","Type":"ContainerStarted","Data":"088f0237ee752c3b0e214fb55d41290c1eb649fa93f79d79105c00a48c7097e0"} Feb 18 01:13:27 crc kubenswrapper[4847]: I0218 01:13:27.835069 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgjhx" event={"ID":"a9ace25a-a505-4524-b25c-72dfe2e17b53","Type":"ContainerStarted","Data":"8d96f429c6dbb37cfc52f7aa2cc03d7e5836b195b4ffde33651dc4d10a44fa11"} Feb 18 01:13:29 crc kubenswrapper[4847]: I0218 01:13:29.856048 4847 generic.go:334] "Generic (PLEG): container finished" podID="a9ace25a-a505-4524-b25c-72dfe2e17b53" containerID="8d96f429c6dbb37cfc52f7aa2cc03d7e5836b195b4ffde33651dc4d10a44fa11" exitCode=0 Feb 18 01:13:29 crc kubenswrapper[4847]: I0218 01:13:29.856152 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgjhx" event={"ID":"a9ace25a-a505-4524-b25c-72dfe2e17b53","Type":"ContainerDied","Data":"8d96f429c6dbb37cfc52f7aa2cc03d7e5836b195b4ffde33651dc4d10a44fa11"} Feb 18 01:13:30 crc kubenswrapper[4847]: I0218 01:13:30.872577 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgjhx" event={"ID":"a9ace25a-a505-4524-b25c-72dfe2e17b53","Type":"ContainerStarted","Data":"30fd66f9987402f3ac8929e4084fe234b250348f5f60c75937c14970a26956b6"} Feb 18 01:13:30 crc kubenswrapper[4847]: I0218 01:13:30.901278 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rgjhx" podStartSLOduration=2.452814263 podStartE2EDuration="5.901253328s" podCreationTimestamp="2026-02-18 01:13:25 +0000 UTC" firstStartedPulling="2026-02-18 01:13:26.821276326 +0000 UTC m=+2880.198627278" lastFinishedPulling="2026-02-18 01:13:30.269715361 +0000 UTC m=+2883.647066343" observedRunningTime="2026-02-18 01:13:30.897859983 +0000 UTC m=+2884.275210935" watchObservedRunningTime="2026-02-18 01:13:30.901253328 +0000 UTC 
m=+2884.278604290" Feb 18 01:13:31 crc kubenswrapper[4847]: E0218 01:13:31.407681 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:13:35 crc kubenswrapper[4847]: I0218 01:13:35.686411 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rgjhx" Feb 18 01:13:35 crc kubenswrapper[4847]: I0218 01:13:35.687275 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rgjhx" Feb 18 01:13:35 crc kubenswrapper[4847]: I0218 01:13:35.776310 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rgjhx" Feb 18 01:13:36 crc kubenswrapper[4847]: I0218 01:13:36.001517 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rgjhx" Feb 18 01:13:36 crc kubenswrapper[4847]: I0218 01:13:36.141195 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rgjhx"] Feb 18 01:13:37 crc kubenswrapper[4847]: I0218 01:13:37.978300 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rgjhx" podUID="a9ace25a-a505-4524-b25c-72dfe2e17b53" containerName="registry-server" containerID="cri-o://30fd66f9987402f3ac8929e4084fe234b250348f5f60c75937c14970a26956b6" gracePeriod=2 Feb 18 01:13:38 crc kubenswrapper[4847]: I0218 01:13:38.405452 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:13:38 crc kubenswrapper[4847]: E0218 01:13:38.406021 4847 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:13:38 crc kubenswrapper[4847]: E0218 01:13:38.407562 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:13:38 crc kubenswrapper[4847]: I0218 01:13:38.554940 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rgjhx" Feb 18 01:13:38 crc kubenswrapper[4847]: I0218 01:13:38.594426 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxfj4\" (UniqueName: \"kubernetes.io/projected/a9ace25a-a505-4524-b25c-72dfe2e17b53-kube-api-access-qxfj4\") pod \"a9ace25a-a505-4524-b25c-72dfe2e17b53\" (UID: \"a9ace25a-a505-4524-b25c-72dfe2e17b53\") " Feb 18 01:13:38 crc kubenswrapper[4847]: I0218 01:13:38.594680 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9ace25a-a505-4524-b25c-72dfe2e17b53-catalog-content\") pod \"a9ace25a-a505-4524-b25c-72dfe2e17b53\" (UID: \"a9ace25a-a505-4524-b25c-72dfe2e17b53\") " Feb 18 01:13:38 crc kubenswrapper[4847]: I0218 01:13:38.594760 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9ace25a-a505-4524-b25c-72dfe2e17b53-utilities\") pod 
\"a9ace25a-a505-4524-b25c-72dfe2e17b53\" (UID: \"a9ace25a-a505-4524-b25c-72dfe2e17b53\") " Feb 18 01:13:38 crc kubenswrapper[4847]: I0218 01:13:38.596205 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9ace25a-a505-4524-b25c-72dfe2e17b53-utilities" (OuterVolumeSpecName: "utilities") pod "a9ace25a-a505-4524-b25c-72dfe2e17b53" (UID: "a9ace25a-a505-4524-b25c-72dfe2e17b53"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:13:38 crc kubenswrapper[4847]: I0218 01:13:38.602257 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9ace25a-a505-4524-b25c-72dfe2e17b53-kube-api-access-qxfj4" (OuterVolumeSpecName: "kube-api-access-qxfj4") pod "a9ace25a-a505-4524-b25c-72dfe2e17b53" (UID: "a9ace25a-a505-4524-b25c-72dfe2e17b53"). InnerVolumeSpecName "kube-api-access-qxfj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:13:38 crc kubenswrapper[4847]: I0218 01:13:38.659315 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9ace25a-a505-4524-b25c-72dfe2e17b53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9ace25a-a505-4524-b25c-72dfe2e17b53" (UID: "a9ace25a-a505-4524-b25c-72dfe2e17b53"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:13:38 crc kubenswrapper[4847]: I0218 01:13:38.696595 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9ace25a-a505-4524-b25c-72dfe2e17b53-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:13:38 crc kubenswrapper[4847]: I0218 01:13:38.696642 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9ace25a-a505-4524-b25c-72dfe2e17b53-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:13:38 crc kubenswrapper[4847]: I0218 01:13:38.696652 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxfj4\" (UniqueName: \"kubernetes.io/projected/a9ace25a-a505-4524-b25c-72dfe2e17b53-kube-api-access-qxfj4\") on node \"crc\" DevicePath \"\"" Feb 18 01:13:38 crc kubenswrapper[4847]: I0218 01:13:38.991225 4847 generic.go:334] "Generic (PLEG): container finished" podID="a9ace25a-a505-4524-b25c-72dfe2e17b53" containerID="30fd66f9987402f3ac8929e4084fe234b250348f5f60c75937c14970a26956b6" exitCode=0 Feb 18 01:13:38 crc kubenswrapper[4847]: I0218 01:13:38.991274 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgjhx" event={"ID":"a9ace25a-a505-4524-b25c-72dfe2e17b53","Type":"ContainerDied","Data":"30fd66f9987402f3ac8929e4084fe234b250348f5f60c75937c14970a26956b6"} Feb 18 01:13:38 crc kubenswrapper[4847]: I0218 01:13:38.991306 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rgjhx" event={"ID":"a9ace25a-a505-4524-b25c-72dfe2e17b53","Type":"ContainerDied","Data":"088f0237ee752c3b0e214fb55d41290c1eb649fa93f79d79105c00a48c7097e0"} Feb 18 01:13:38 crc kubenswrapper[4847]: I0218 01:13:38.991327 4847 scope.go:117] "RemoveContainer" containerID="30fd66f9987402f3ac8929e4084fe234b250348f5f60c75937c14970a26956b6" Feb 18 01:13:38 crc kubenswrapper[4847]: I0218 
01:13:38.991326 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rgjhx" Feb 18 01:13:39 crc kubenswrapper[4847]: I0218 01:13:39.017897 4847 scope.go:117] "RemoveContainer" containerID="8d96f429c6dbb37cfc52f7aa2cc03d7e5836b195b4ffde33651dc4d10a44fa11" Feb 18 01:13:39 crc kubenswrapper[4847]: I0218 01:13:39.058542 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rgjhx"] Feb 18 01:13:39 crc kubenswrapper[4847]: I0218 01:13:39.071144 4847 scope.go:117] "RemoveContainer" containerID="db8812fee4d6aae696eb5f82f6017786ca671e8341ea3df734e455aa5e3a5b18" Feb 18 01:13:39 crc kubenswrapper[4847]: I0218 01:13:39.077379 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rgjhx"] Feb 18 01:13:39 crc kubenswrapper[4847]: I0218 01:13:39.117190 4847 scope.go:117] "RemoveContainer" containerID="30fd66f9987402f3ac8929e4084fe234b250348f5f60c75937c14970a26956b6" Feb 18 01:13:39 crc kubenswrapper[4847]: E0218 01:13:39.118367 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30fd66f9987402f3ac8929e4084fe234b250348f5f60c75937c14970a26956b6\": container with ID starting with 30fd66f9987402f3ac8929e4084fe234b250348f5f60c75937c14970a26956b6 not found: ID does not exist" containerID="30fd66f9987402f3ac8929e4084fe234b250348f5f60c75937c14970a26956b6" Feb 18 01:13:39 crc kubenswrapper[4847]: I0218 01:13:39.118457 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30fd66f9987402f3ac8929e4084fe234b250348f5f60c75937c14970a26956b6"} err="failed to get container status \"30fd66f9987402f3ac8929e4084fe234b250348f5f60c75937c14970a26956b6\": rpc error: code = NotFound desc = could not find container \"30fd66f9987402f3ac8929e4084fe234b250348f5f60c75937c14970a26956b6\": container with ID starting with 
30fd66f9987402f3ac8929e4084fe234b250348f5f60c75937c14970a26956b6 not found: ID does not exist" Feb 18 01:13:39 crc kubenswrapper[4847]: I0218 01:13:39.118529 4847 scope.go:117] "RemoveContainer" containerID="8d96f429c6dbb37cfc52f7aa2cc03d7e5836b195b4ffde33651dc4d10a44fa11" Feb 18 01:13:39 crc kubenswrapper[4847]: E0218 01:13:39.119169 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d96f429c6dbb37cfc52f7aa2cc03d7e5836b195b4ffde33651dc4d10a44fa11\": container with ID starting with 8d96f429c6dbb37cfc52f7aa2cc03d7e5836b195b4ffde33651dc4d10a44fa11 not found: ID does not exist" containerID="8d96f429c6dbb37cfc52f7aa2cc03d7e5836b195b4ffde33651dc4d10a44fa11" Feb 18 01:13:39 crc kubenswrapper[4847]: I0218 01:13:39.119333 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d96f429c6dbb37cfc52f7aa2cc03d7e5836b195b4ffde33651dc4d10a44fa11"} err="failed to get container status \"8d96f429c6dbb37cfc52f7aa2cc03d7e5836b195b4ffde33651dc4d10a44fa11\": rpc error: code = NotFound desc = could not find container \"8d96f429c6dbb37cfc52f7aa2cc03d7e5836b195b4ffde33651dc4d10a44fa11\": container with ID starting with 8d96f429c6dbb37cfc52f7aa2cc03d7e5836b195b4ffde33651dc4d10a44fa11 not found: ID does not exist" Feb 18 01:13:39 crc kubenswrapper[4847]: I0218 01:13:39.119427 4847 scope.go:117] "RemoveContainer" containerID="db8812fee4d6aae696eb5f82f6017786ca671e8341ea3df734e455aa5e3a5b18" Feb 18 01:13:39 crc kubenswrapper[4847]: E0218 01:13:39.120105 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db8812fee4d6aae696eb5f82f6017786ca671e8341ea3df734e455aa5e3a5b18\": container with ID starting with db8812fee4d6aae696eb5f82f6017786ca671e8341ea3df734e455aa5e3a5b18 not found: ID does not exist" containerID="db8812fee4d6aae696eb5f82f6017786ca671e8341ea3df734e455aa5e3a5b18" Feb 18 01:13:39 crc 
kubenswrapper[4847]: I0218 01:13:39.120176 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db8812fee4d6aae696eb5f82f6017786ca671e8341ea3df734e455aa5e3a5b18"} err="failed to get container status \"db8812fee4d6aae696eb5f82f6017786ca671e8341ea3df734e455aa5e3a5b18\": rpc error: code = NotFound desc = could not find container \"db8812fee4d6aae696eb5f82f6017786ca671e8341ea3df734e455aa5e3a5b18\": container with ID starting with db8812fee4d6aae696eb5f82f6017786ca671e8341ea3df734e455aa5e3a5b18 not found: ID does not exist" Feb 18 01:13:39 crc kubenswrapper[4847]: I0218 01:13:39.428080 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9ace25a-a505-4524-b25c-72dfe2e17b53" path="/var/lib/kubelet/pods/a9ace25a-a505-4524-b25c-72dfe2e17b53/volumes" Feb 18 01:13:44 crc kubenswrapper[4847]: E0218 01:13:44.406200 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:13:49 crc kubenswrapper[4847]: I0218 01:13:49.404579 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:13:49 crc kubenswrapper[4847]: E0218 01:13:49.406735 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:13:52 crc kubenswrapper[4847]: E0218 01:13:52.406937 4847 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:13:56 crc kubenswrapper[4847]: E0218 01:13:56.407520 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:14:01 crc kubenswrapper[4847]: I0218 01:14:01.404842 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:14:01 crc kubenswrapper[4847]: E0218 01:14:01.406001 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:14:07 crc kubenswrapper[4847]: E0218 01:14:07.423364 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:14:10 crc kubenswrapper[4847]: E0218 01:14:10.406990 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:14:14 crc kubenswrapper[4847]: I0218 01:14:14.404870 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:14:14 crc kubenswrapper[4847]: E0218 01:14:14.405496 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:14:22 crc kubenswrapper[4847]: E0218 01:14:22.411662 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:14:23 crc kubenswrapper[4847]: E0218 01:14:23.407391 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:14:29 crc kubenswrapper[4847]: I0218 01:14:29.404778 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:14:29 crc kubenswrapper[4847]: E0218 01:14:29.405878 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:14:34 crc kubenswrapper[4847]: E0218 01:14:34.408370 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:14:36 crc kubenswrapper[4847]: E0218 01:14:36.407012 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:14:44 crc kubenswrapper[4847]: I0218 01:14:44.405068 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:14:44 crc kubenswrapper[4847]: E0218 01:14:44.406293 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:14:49 crc kubenswrapper[4847]: E0218 01:14:49.407190 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:14:49 crc kubenswrapper[4847]: E0218 01:14:49.407306 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:14:56 crc kubenswrapper[4847]: I0218 01:14:56.405195 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:14:56 crc kubenswrapper[4847]: E0218 01:14:56.406164 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:15:00 crc kubenswrapper[4847]: I0218 01:15:00.170990 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522955-2nc4b"] Feb 18 01:15:00 crc kubenswrapper[4847]: E0218 01:15:00.172128 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9ace25a-a505-4524-b25c-72dfe2e17b53" containerName="extract-utilities" Feb 18 01:15:00 crc kubenswrapper[4847]: I0218 01:15:00.172149 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9ace25a-a505-4524-b25c-72dfe2e17b53" containerName="extract-utilities" Feb 18 01:15:00 crc kubenswrapper[4847]: E0218 01:15:00.172181 4847 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a9ace25a-a505-4524-b25c-72dfe2e17b53" containerName="registry-server" Feb 18 01:15:00 crc kubenswrapper[4847]: I0218 01:15:00.172189 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9ace25a-a505-4524-b25c-72dfe2e17b53" containerName="registry-server" Feb 18 01:15:00 crc kubenswrapper[4847]: E0218 01:15:00.172213 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9ace25a-a505-4524-b25c-72dfe2e17b53" containerName="extract-content" Feb 18 01:15:00 crc kubenswrapper[4847]: I0218 01:15:00.172221 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9ace25a-a505-4524-b25c-72dfe2e17b53" containerName="extract-content" Feb 18 01:15:00 crc kubenswrapper[4847]: I0218 01:15:00.172472 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9ace25a-a505-4524-b25c-72dfe2e17b53" containerName="registry-server" Feb 18 01:15:00 crc kubenswrapper[4847]: I0218 01:15:00.173371 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-2nc4b" Feb 18 01:15:00 crc kubenswrapper[4847]: I0218 01:15:00.178081 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 01:15:00 crc kubenswrapper[4847]: I0218 01:15:00.179111 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 01:15:00 crc kubenswrapper[4847]: I0218 01:15:00.190119 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522955-2nc4b"] Feb 18 01:15:00 crc kubenswrapper[4847]: I0218 01:15:00.324197 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6eeb9922-af86-4d31-8c27-2c32c5a6e178-secret-volume\") pod 
\"collect-profiles-29522955-2nc4b\" (UID: \"6eeb9922-af86-4d31-8c27-2c32c5a6e178\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-2nc4b" Feb 18 01:15:00 crc kubenswrapper[4847]: I0218 01:15:00.324244 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6eeb9922-af86-4d31-8c27-2c32c5a6e178-config-volume\") pod \"collect-profiles-29522955-2nc4b\" (UID: \"6eeb9922-af86-4d31-8c27-2c32c5a6e178\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-2nc4b" Feb 18 01:15:00 crc kubenswrapper[4847]: I0218 01:15:00.324266 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxbk4\" (UniqueName: \"kubernetes.io/projected/6eeb9922-af86-4d31-8c27-2c32c5a6e178-kube-api-access-hxbk4\") pod \"collect-profiles-29522955-2nc4b\" (UID: \"6eeb9922-af86-4d31-8c27-2c32c5a6e178\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-2nc4b" Feb 18 01:15:00 crc kubenswrapper[4847]: I0218 01:15:00.427306 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6eeb9922-af86-4d31-8c27-2c32c5a6e178-secret-volume\") pod \"collect-profiles-29522955-2nc4b\" (UID: \"6eeb9922-af86-4d31-8c27-2c32c5a6e178\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-2nc4b" Feb 18 01:15:00 crc kubenswrapper[4847]: I0218 01:15:00.427363 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6eeb9922-af86-4d31-8c27-2c32c5a6e178-config-volume\") pod \"collect-profiles-29522955-2nc4b\" (UID: \"6eeb9922-af86-4d31-8c27-2c32c5a6e178\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-2nc4b" Feb 18 01:15:00 crc kubenswrapper[4847]: I0218 01:15:00.427397 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hxbk4\" (UniqueName: \"kubernetes.io/projected/6eeb9922-af86-4d31-8c27-2c32c5a6e178-kube-api-access-hxbk4\") pod \"collect-profiles-29522955-2nc4b\" (UID: \"6eeb9922-af86-4d31-8c27-2c32c5a6e178\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-2nc4b" Feb 18 01:15:00 crc kubenswrapper[4847]: I0218 01:15:00.429059 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6eeb9922-af86-4d31-8c27-2c32c5a6e178-config-volume\") pod \"collect-profiles-29522955-2nc4b\" (UID: \"6eeb9922-af86-4d31-8c27-2c32c5a6e178\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-2nc4b" Feb 18 01:15:00 crc kubenswrapper[4847]: I0218 01:15:00.436891 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6eeb9922-af86-4d31-8c27-2c32c5a6e178-secret-volume\") pod \"collect-profiles-29522955-2nc4b\" (UID: \"6eeb9922-af86-4d31-8c27-2c32c5a6e178\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-2nc4b" Feb 18 01:15:00 crc kubenswrapper[4847]: I0218 01:15:00.445482 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxbk4\" (UniqueName: \"kubernetes.io/projected/6eeb9922-af86-4d31-8c27-2c32c5a6e178-kube-api-access-hxbk4\") pod \"collect-profiles-29522955-2nc4b\" (UID: \"6eeb9922-af86-4d31-8c27-2c32c5a6e178\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-2nc4b" Feb 18 01:15:00 crc kubenswrapper[4847]: I0218 01:15:00.498151 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-2nc4b" Feb 18 01:15:00 crc kubenswrapper[4847]: I0218 01:15:00.983765 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522955-2nc4b"] Feb 18 01:15:01 crc kubenswrapper[4847]: I0218 01:15:01.110929 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-2nc4b" event={"ID":"6eeb9922-af86-4d31-8c27-2c32c5a6e178","Type":"ContainerStarted","Data":"1b7a7530560c8321113f5c6ed1d4a79b70ca1088af30bf471ccd31373fb7eb99"} Feb 18 01:15:02 crc kubenswrapper[4847]: I0218 01:15:02.126580 4847 generic.go:334] "Generic (PLEG): container finished" podID="6eeb9922-af86-4d31-8c27-2c32c5a6e178" containerID="c288a9e83d0ecb0b38df3eb2ed359301d6e0d77dc9d091276c7de97d439d8513" exitCode=0 Feb 18 01:15:02 crc kubenswrapper[4847]: I0218 01:15:02.126924 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-2nc4b" event={"ID":"6eeb9922-af86-4d31-8c27-2c32c5a6e178","Type":"ContainerDied","Data":"c288a9e83d0ecb0b38df3eb2ed359301d6e0d77dc9d091276c7de97d439d8513"} Feb 18 01:15:03 crc kubenswrapper[4847]: I0218 01:15:03.594660 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-2nc4b" Feb 18 01:15:03 crc kubenswrapper[4847]: I0218 01:15:03.701796 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6eeb9922-af86-4d31-8c27-2c32c5a6e178-config-volume\") pod \"6eeb9922-af86-4d31-8c27-2c32c5a6e178\" (UID: \"6eeb9922-af86-4d31-8c27-2c32c5a6e178\") " Feb 18 01:15:03 crc kubenswrapper[4847]: I0218 01:15:03.701868 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxbk4\" (UniqueName: \"kubernetes.io/projected/6eeb9922-af86-4d31-8c27-2c32c5a6e178-kube-api-access-hxbk4\") pod \"6eeb9922-af86-4d31-8c27-2c32c5a6e178\" (UID: \"6eeb9922-af86-4d31-8c27-2c32c5a6e178\") " Feb 18 01:15:03 crc kubenswrapper[4847]: I0218 01:15:03.701920 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6eeb9922-af86-4d31-8c27-2c32c5a6e178-secret-volume\") pod \"6eeb9922-af86-4d31-8c27-2c32c5a6e178\" (UID: \"6eeb9922-af86-4d31-8c27-2c32c5a6e178\") " Feb 18 01:15:03 crc kubenswrapper[4847]: I0218 01:15:03.703190 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eeb9922-af86-4d31-8c27-2c32c5a6e178-config-volume" (OuterVolumeSpecName: "config-volume") pod "6eeb9922-af86-4d31-8c27-2c32c5a6e178" (UID: "6eeb9922-af86-4d31-8c27-2c32c5a6e178"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 01:15:03 crc kubenswrapper[4847]: I0218 01:15:03.711302 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eeb9922-af86-4d31-8c27-2c32c5a6e178-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6eeb9922-af86-4d31-8c27-2c32c5a6e178" (UID: "6eeb9922-af86-4d31-8c27-2c32c5a6e178"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:15:03 crc kubenswrapper[4847]: I0218 01:15:03.711918 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eeb9922-af86-4d31-8c27-2c32c5a6e178-kube-api-access-hxbk4" (OuterVolumeSpecName: "kube-api-access-hxbk4") pod "6eeb9922-af86-4d31-8c27-2c32c5a6e178" (UID: "6eeb9922-af86-4d31-8c27-2c32c5a6e178"). InnerVolumeSpecName "kube-api-access-hxbk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:15:03 crc kubenswrapper[4847]: I0218 01:15:03.805574 4847 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6eeb9922-af86-4d31-8c27-2c32c5a6e178-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 01:15:03 crc kubenswrapper[4847]: I0218 01:15:03.805650 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxbk4\" (UniqueName: \"kubernetes.io/projected/6eeb9922-af86-4d31-8c27-2c32c5a6e178-kube-api-access-hxbk4\") on node \"crc\" DevicePath \"\"" Feb 18 01:15:03 crc kubenswrapper[4847]: I0218 01:15:03.805677 4847 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6eeb9922-af86-4d31-8c27-2c32c5a6e178-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 01:15:04 crc kubenswrapper[4847]: I0218 01:15:04.151205 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-2nc4b" event={"ID":"6eeb9922-af86-4d31-8c27-2c32c5a6e178","Type":"ContainerDied","Data":"1b7a7530560c8321113f5c6ed1d4a79b70ca1088af30bf471ccd31373fb7eb99"} Feb 18 01:15:04 crc kubenswrapper[4847]: I0218 01:15:04.151256 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b7a7530560c8321113f5c6ed1d4a79b70ca1088af30bf471ccd31373fb7eb99" Feb 18 01:15:04 crc kubenswrapper[4847]: I0218 01:15:04.151292 4847 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522955-2nc4b" Feb 18 01:15:04 crc kubenswrapper[4847]: E0218 01:15:04.406984 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:15:04 crc kubenswrapper[4847]: E0218 01:15:04.407437 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:15:04 crc kubenswrapper[4847]: I0218 01:15:04.712581 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522910-6ss5t"] Feb 18 01:15:04 crc kubenswrapper[4847]: I0218 01:15:04.722056 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522910-6ss5t"] Feb 18 01:15:05 crc kubenswrapper[4847]: I0218 01:15:05.422336 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49dd2490-6e51-4d9b-afea-1f1c33f7fa21" path="/var/lib/kubelet/pods/49dd2490-6e51-4d9b-afea-1f1c33f7fa21/volumes" Feb 18 01:15:08 crc kubenswrapper[4847]: I0218 01:15:08.405084 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:15:08 crc kubenswrapper[4847]: E0218 01:15:08.406218 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:15:15 crc kubenswrapper[4847]: E0218 01:15:15.407323 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:15:19 crc kubenswrapper[4847]: I0218 01:15:19.404597 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:15:19 crc kubenswrapper[4847]: E0218 01:15:19.406149 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:15:19 crc kubenswrapper[4847]: E0218 01:15:19.406740 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:15:30 crc kubenswrapper[4847]: E0218 01:15:30.406057 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:15:30 crc kubenswrapper[4847]: E0218 01:15:30.406148 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:15:32 crc kubenswrapper[4847]: I0218 01:15:32.405623 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:15:32 crc kubenswrapper[4847]: E0218 01:15:32.406687 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:15:34 crc kubenswrapper[4847]: I0218 01:15:34.769387 4847 scope.go:117] "RemoveContainer" containerID="b609f841f279a2079f88822b82565fe4b882b95302e98fb98bfff804ece01769" Feb 18 01:15:34 crc kubenswrapper[4847]: I0218 01:15:34.798085 4847 scope.go:117] "RemoveContainer" containerID="ecb31791c7cd1a8965eeb3a23fc1367d1c6586e9187417a495e50daf827cfcd5" Feb 18 01:15:34 crc kubenswrapper[4847]: I0218 01:15:34.847720 4847 scope.go:117] "RemoveContainer" containerID="972ce2ed4c22f4f0813d6e375ae586437e3144a9e8de123375e3e40cd9a61ed9" Feb 18 01:15:34 crc kubenswrapper[4847]: I0218 01:15:34.870059 4847 scope.go:117] "RemoveContainer" containerID="a144e8bc1abbcd00d1a97e6da38dfc53673ed10bbc5ea3da79fa492e297224d6" Feb 18 01:15:44 crc kubenswrapper[4847]: 
I0218 01:15:44.405322 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:15:44 crc kubenswrapper[4847]: E0218 01:15:44.406817 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:15:45 crc kubenswrapper[4847]: E0218 01:15:45.409835 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:15:45 crc kubenswrapper[4847]: E0218 01:15:45.409881 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.045101 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z"] Feb 18 01:15:55 crc kubenswrapper[4847]: E0218 01:15:55.046075 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eeb9922-af86-4d31-8c27-2c32c5a6e178" containerName="collect-profiles" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.046101 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eeb9922-af86-4d31-8c27-2c32c5a6e178" 
containerName="collect-profiles" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.046405 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eeb9922-af86-4d31-8c27-2c32c5a6e178" containerName="collect-profiles" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.047294 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.049766 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.050560 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.050713 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.051872 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.052521 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xzl9l" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.063554 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z"] Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.139356 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.139477 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs9fx\" (UniqueName: \"kubernetes.io/projected/eaf6af26-8056-47b5-9732-a0fc0f4680d6-kube-api-access-bs9fx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.139540 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.139587 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.139665 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 
01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.139693 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.139883 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.241793 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.242002 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.242112 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.242237 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs9fx\" (UniqueName: \"kubernetes.io/projected/eaf6af26-8056-47b5-9732-a0fc0f4680d6-kube-api-access-bs9fx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.242361 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.242441 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.242506 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ceilometer-compute-config-data-2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.248726 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.249367 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.251092 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.251566 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 
01:15:55.251831 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.256302 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.266673 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs9fx\" (UniqueName: \"kubernetes.io/projected/eaf6af26-8056-47b5-9732-a0fc0f4680d6-kube-api-access-bs9fx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.372438 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:15:55 crc kubenswrapper[4847]: I0218 01:15:55.989375 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z"] Feb 18 01:15:55 crc kubenswrapper[4847]: W0218 01:15:55.993881 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaf6af26_8056_47b5_9732_a0fc0f4680d6.slice/crio-5e617aaad9be4d7e0a428974fc5392797f71839b926feb3db5cd2a638e6e730c WatchSource:0}: Error finding container 5e617aaad9be4d7e0a428974fc5392797f71839b926feb3db5cd2a638e6e730c: Status 404 returned error can't find the container with id 5e617aaad9be4d7e0a428974fc5392797f71839b926feb3db5cd2a638e6e730c Feb 18 01:15:56 crc kubenswrapper[4847]: I0218 01:15:56.848182 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" event={"ID":"eaf6af26-8056-47b5-9732-a0fc0f4680d6","Type":"ContainerStarted","Data":"fedd39992167a4de3a64a3dacc312d5ebbea5e813c48516711e99586b1e5dfa3"} Feb 18 01:15:56 crc kubenswrapper[4847]: I0218 01:15:56.849191 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" event={"ID":"eaf6af26-8056-47b5-9732-a0fc0f4680d6","Type":"ContainerStarted","Data":"5e617aaad9be4d7e0a428974fc5392797f71839b926feb3db5cd2a638e6e730c"} Feb 18 01:15:56 crc kubenswrapper[4847]: I0218 01:15:56.871271 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" podStartSLOduration=1.440272624 podStartE2EDuration="1.871255454s" podCreationTimestamp="2026-02-18 01:15:55 +0000 UTC" firstStartedPulling="2026-02-18 01:15:55.998582246 +0000 UTC m=+3029.375933228" lastFinishedPulling="2026-02-18 01:15:56.429565076 +0000 UTC m=+3029.806916058" 
observedRunningTime="2026-02-18 01:15:56.864725961 +0000 UTC m=+3030.242076903" watchObservedRunningTime="2026-02-18 01:15:56.871255454 +0000 UTC m=+3030.248606396" Feb 18 01:15:57 crc kubenswrapper[4847]: I0218 01:15:57.421221 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:15:57 crc kubenswrapper[4847]: E0218 01:15:57.421816 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:15:59 crc kubenswrapper[4847]: E0218 01:15:59.406522 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:16:00 crc kubenswrapper[4847]: E0218 01:16:00.408209 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:16:08 crc kubenswrapper[4847]: I0218 01:16:08.404428 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:16:08 crc kubenswrapper[4847]: E0218 01:16:08.405489 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:16:11 crc kubenswrapper[4847]: E0218 01:16:11.408365 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:16:15 crc kubenswrapper[4847]: E0218 01:16:15.407712 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:16:21 crc kubenswrapper[4847]: I0218 01:16:21.405032 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:16:21 crc kubenswrapper[4847]: E0218 01:16:21.406201 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:16:24 crc kubenswrapper[4847]: E0218 01:16:24.408132 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:16:29 crc kubenswrapper[4847]: E0218 01:16:29.407326 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:16:32 crc kubenswrapper[4847]: I0218 01:16:32.404225 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:16:32 crc kubenswrapper[4847]: E0218 01:16:32.404827 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:16:36 crc kubenswrapper[4847]: E0218 01:16:36.407135 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:16:43 crc kubenswrapper[4847]: E0218 01:16:43.408219 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:16:46 crc kubenswrapper[4847]: I0218 01:16:46.404450 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:16:46 crc kubenswrapper[4847]: E0218 01:16:46.405941 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:16:47 crc kubenswrapper[4847]: I0218 01:16:47.499688 4847 generic.go:334] "Generic (PLEG): container finished" podID="eaf6af26-8056-47b5-9732-a0fc0f4680d6" containerID="fedd39992167a4de3a64a3dacc312d5ebbea5e813c48516711e99586b1e5dfa3" exitCode=2 Feb 18 01:16:47 crc kubenswrapper[4847]: I0218 01:16:47.499732 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" event={"ID":"eaf6af26-8056-47b5-9732-a0fc0f4680d6","Type":"ContainerDied","Data":"fedd39992167a4de3a64a3dacc312d5ebbea5e813c48516711e99586b1e5dfa3"} Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 01:16:49.001702 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 01:16:49.145688 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ceilometer-compute-config-data-1\") pod \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 01:16:49.145740 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ceilometer-compute-config-data-0\") pod \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 01:16:49.145779 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs9fx\" (UniqueName: \"kubernetes.io/projected/eaf6af26-8056-47b5-9732-a0fc0f4680d6-kube-api-access-bs9fx\") pod \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 01:16:49.145835 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-inventory\") pod \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 01:16:49.145852 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ssh-key-openstack-edpm-ipam\") pod \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 
01:16:49.145887 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ceilometer-compute-config-data-2\") pod \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 01:16:49.145969 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-telemetry-combined-ca-bundle\") pod \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\" (UID: \"eaf6af26-8056-47b5-9732-a0fc0f4680d6\") " Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 01:16:49.152927 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "eaf6af26-8056-47b5-9732-a0fc0f4680d6" (UID: "eaf6af26-8056-47b5-9732-a0fc0f4680d6"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 01:16:49.152974 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaf6af26-8056-47b5-9732-a0fc0f4680d6-kube-api-access-bs9fx" (OuterVolumeSpecName: "kube-api-access-bs9fx") pod "eaf6af26-8056-47b5-9732-a0fc0f4680d6" (UID: "eaf6af26-8056-47b5-9732-a0fc0f4680d6"). InnerVolumeSpecName "kube-api-access-bs9fx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 01:16:49.178805 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "eaf6af26-8056-47b5-9732-a0fc0f4680d6" (UID: "eaf6af26-8056-47b5-9732-a0fc0f4680d6"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 01:16:49.179029 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "eaf6af26-8056-47b5-9732-a0fc0f4680d6" (UID: "eaf6af26-8056-47b5-9732-a0fc0f4680d6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 01:16:49.181672 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-inventory" (OuterVolumeSpecName: "inventory") pod "eaf6af26-8056-47b5-9732-a0fc0f4680d6" (UID: "eaf6af26-8056-47b5-9732-a0fc0f4680d6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 01:16:49.209275 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "eaf6af26-8056-47b5-9732-a0fc0f4680d6" (UID: "eaf6af26-8056-47b5-9732-a0fc0f4680d6"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 01:16:49.211596 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "eaf6af26-8056-47b5-9732-a0fc0f4680d6" (UID: "eaf6af26-8056-47b5-9732-a0fc0f4680d6"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 01:16:49.249390 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 01:16:49.249448 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 01:16:49.249472 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs9fx\" (UniqueName: \"kubernetes.io/projected/eaf6af26-8056-47b5-9732-a0fc0f4680d6-kube-api-access-bs9fx\") on node \"crc\" DevicePath \"\"" Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 01:16:49.249492 4847 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 01:16:49.249512 4847 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 01:16:49 crc 
kubenswrapper[4847]: I0218 01:16:49.249530 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 01:16:49.249547 4847 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf6af26-8056-47b5-9732-a0fc0f4680d6-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 01:16:49.536460 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" event={"ID":"eaf6af26-8056-47b5-9732-a0fc0f4680d6","Type":"ContainerDied","Data":"5e617aaad9be4d7e0a428974fc5392797f71839b926feb3db5cd2a638e6e730c"} Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 01:16:49.536507 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e617aaad9be4d7e0a428974fc5392797f71839b926feb3db5cd2a638e6e730c" Feb 18 01:16:49 crc kubenswrapper[4847]: I0218 01:16:49.536623 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z" Feb 18 01:16:51 crc kubenswrapper[4847]: E0218 01:16:51.407212 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:16:55 crc kubenswrapper[4847]: E0218 01:16:55.408974 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:16:58 crc kubenswrapper[4847]: I0218 01:16:58.405181 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:16:58 crc kubenswrapper[4847]: E0218 01:16:58.405784 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:17:04 crc kubenswrapper[4847]: E0218 01:17:04.409060 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:17:06 crc 
kubenswrapper[4847]: E0218 01:17:06.407400 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:17:12 crc kubenswrapper[4847]: I0218 01:17:12.404479 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:17:12 crc kubenswrapper[4847]: E0218 01:17:12.406696 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:17:19 crc kubenswrapper[4847]: E0218 01:17:19.407770 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:17:21 crc kubenswrapper[4847]: E0218 01:17:21.410421 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:17:26 crc kubenswrapper[4847]: I0218 01:17:26.404972 4847 scope.go:117] "RemoveContainer" 
containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:17:26 crc kubenswrapper[4847]: E0218 01:17:26.407330 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:17:33 crc kubenswrapper[4847]: E0218 01:17:33.408014 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:17:33 crc kubenswrapper[4847]: E0218 01:17:33.409313 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:17:39 crc kubenswrapper[4847]: I0218 01:17:39.405311 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:17:39 crc kubenswrapper[4847]: E0218 01:17:39.406698 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:17:44 crc kubenswrapper[4847]: E0218 01:17:44.409039 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:17:47 crc kubenswrapper[4847]: I0218 01:17:47.430280 4847 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 01:17:47 crc kubenswrapper[4847]: E0218 01:17:47.561421 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:17:47 crc kubenswrapper[4847]: E0218 01:17:47.561486 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:17:47 crc kubenswrapper[4847]: E0218 01:17:47.561632 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjwt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-k4t5r_openstack(452f74c1-fa5f-464b-9943-a4a1c2d5c48a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:17:47 crc kubenswrapper[4847]: E0218 01:17:47.562903 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:17:54 crc kubenswrapper[4847]: I0218 01:17:54.405385 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:17:54 crc kubenswrapper[4847]: E0218 01:17:54.406788 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:17:56 crc kubenswrapper[4847]: E0218 01:17:56.408132 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:17:58 crc kubenswrapper[4847]: E0218 01:17:58.406002 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:18:07 crc kubenswrapper[4847]: I0218 01:18:07.413722 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:18:07 crc kubenswrapper[4847]: E0218 01:18:07.414761 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:18:11 crc kubenswrapper[4847]: E0218 01:18:11.560457 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:18:11 crc kubenswrapper[4847]: E0218 01:18:11.561075 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:18:11 crc kubenswrapper[4847]: E0218 01:18:11.561325 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9fhbdh67dh64h568hch65dhb8h67chf7h66dhfh585h565h85h554h68ch8bh56ch677h57bhb5h544h575hd7h5f8hc7h66dh669h57bh589h56cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:18:11 crc kubenswrapper[4847]: E0218 01:18:11.562681 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:18:12 crc kubenswrapper[4847]: E0218 01:18:12.407540 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:18:22 crc kubenswrapper[4847]: I0218 01:18:22.404673 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:18:22 crc kubenswrapper[4847]: E0218 01:18:22.405794 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:18:24 crc kubenswrapper[4847]: E0218 01:18:24.406593 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:18:26 crc kubenswrapper[4847]: E0218 01:18:26.414432 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:18:36 
crc kubenswrapper[4847]: I0218 01:18:36.405389 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:18:36 crc kubenswrapper[4847]: E0218 01:18:36.408889 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:18:36 crc kubenswrapper[4847]: I0218 01:18:36.930269 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerStarted","Data":"154c57adabbf819120d699fd0ee78eee9784a12a52f2c8bd23bd6b6288227572"} Feb 18 01:18:39 crc kubenswrapper[4847]: E0218 01:18:39.411351 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:18:49 crc kubenswrapper[4847]: E0218 01:18:49.406631 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:18:50 crc kubenswrapper[4847]: E0218 01:18:50.406803 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:19:00 crc kubenswrapper[4847]: E0218 01:19:00.407073 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:19:01 crc kubenswrapper[4847]: E0218 01:19:01.407563 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:19:13 crc kubenswrapper[4847]: E0218 01:19:13.406203 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:19:15 crc kubenswrapper[4847]: E0218 01:19:15.409296 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:19:25 crc kubenswrapper[4847]: E0218 01:19:25.408318 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:19:26 crc kubenswrapper[4847]: E0218 01:19:26.407075 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:19:35 crc kubenswrapper[4847]: I0218 01:19:35.865936 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sfbrd"] Feb 18 01:19:35 crc kubenswrapper[4847]: E0218 01:19:35.867237 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaf6af26-8056-47b5-9732-a0fc0f4680d6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 01:19:35 crc kubenswrapper[4847]: I0218 01:19:35.867261 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaf6af26-8056-47b5-9732-a0fc0f4680d6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 01:19:35 crc kubenswrapper[4847]: I0218 01:19:35.867656 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaf6af26-8056-47b5-9732-a0fc0f4680d6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 01:19:35 crc kubenswrapper[4847]: I0218 01:19:35.870281 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sfbrd" Feb 18 01:19:35 crc kubenswrapper[4847]: I0218 01:19:35.883527 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sfbrd"] Feb 18 01:19:35 crc kubenswrapper[4847]: I0218 01:19:35.896468 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5b2f\" (UniqueName: \"kubernetes.io/projected/aff962e0-6ef9-4a38-86ae-10c0a136da45-kube-api-access-g5b2f\") pod \"certified-operators-sfbrd\" (UID: \"aff962e0-6ef9-4a38-86ae-10c0a136da45\") " pod="openshift-marketplace/certified-operators-sfbrd" Feb 18 01:19:35 crc kubenswrapper[4847]: I0218 01:19:35.896695 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aff962e0-6ef9-4a38-86ae-10c0a136da45-catalog-content\") pod \"certified-operators-sfbrd\" (UID: \"aff962e0-6ef9-4a38-86ae-10c0a136da45\") " pod="openshift-marketplace/certified-operators-sfbrd" Feb 18 01:19:35 crc kubenswrapper[4847]: I0218 01:19:35.896727 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aff962e0-6ef9-4a38-86ae-10c0a136da45-utilities\") pod \"certified-operators-sfbrd\" (UID: \"aff962e0-6ef9-4a38-86ae-10c0a136da45\") " pod="openshift-marketplace/certified-operators-sfbrd" Feb 18 01:19:35 crc kubenswrapper[4847]: I0218 01:19:35.998732 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aff962e0-6ef9-4a38-86ae-10c0a136da45-catalog-content\") pod \"certified-operators-sfbrd\" (UID: \"aff962e0-6ef9-4a38-86ae-10c0a136da45\") " pod="openshift-marketplace/certified-operators-sfbrd" Feb 18 01:19:35 crc kubenswrapper[4847]: I0218 01:19:35.999089 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aff962e0-6ef9-4a38-86ae-10c0a136da45-utilities\") pod \"certified-operators-sfbrd\" (UID: \"aff962e0-6ef9-4a38-86ae-10c0a136da45\") " pod="openshift-marketplace/certified-operators-sfbrd" Feb 18 01:19:35 crc kubenswrapper[4847]: I0218 01:19:35.999176 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5b2f\" (UniqueName: \"kubernetes.io/projected/aff962e0-6ef9-4a38-86ae-10c0a136da45-kube-api-access-g5b2f\") pod \"certified-operators-sfbrd\" (UID: \"aff962e0-6ef9-4a38-86ae-10c0a136da45\") " pod="openshift-marketplace/certified-operators-sfbrd" Feb 18 01:19:35 crc kubenswrapper[4847]: I0218 01:19:35.999228 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aff962e0-6ef9-4a38-86ae-10c0a136da45-catalog-content\") pod \"certified-operators-sfbrd\" (UID: \"aff962e0-6ef9-4a38-86ae-10c0a136da45\") " pod="openshift-marketplace/certified-operators-sfbrd" Feb 18 01:19:35 crc kubenswrapper[4847]: I0218 01:19:35.999646 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aff962e0-6ef9-4a38-86ae-10c0a136da45-utilities\") pod \"certified-operators-sfbrd\" (UID: \"aff962e0-6ef9-4a38-86ae-10c0a136da45\") " pod="openshift-marketplace/certified-operators-sfbrd" Feb 18 01:19:36 crc kubenswrapper[4847]: I0218 01:19:36.032134 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5b2f\" (UniqueName: \"kubernetes.io/projected/aff962e0-6ef9-4a38-86ae-10c0a136da45-kube-api-access-g5b2f\") pod \"certified-operators-sfbrd\" (UID: \"aff962e0-6ef9-4a38-86ae-10c0a136da45\") " pod="openshift-marketplace/certified-operators-sfbrd" Feb 18 01:19:36 crc kubenswrapper[4847]: I0218 01:19:36.211705 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sfbrd" Feb 18 01:19:36 crc kubenswrapper[4847]: I0218 01:19:36.818240 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sfbrd"] Feb 18 01:19:36 crc kubenswrapper[4847]: W0218 01:19:36.821613 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaff962e0_6ef9_4a38_86ae_10c0a136da45.slice/crio-3d98fde1a50e4bb6e684cc2b21de5aa27893a48f0af5adec544647d271c48780 WatchSource:0}: Error finding container 3d98fde1a50e4bb6e684cc2b21de5aa27893a48f0af5adec544647d271c48780: Status 404 returned error can't find the container with id 3d98fde1a50e4bb6e684cc2b21de5aa27893a48f0af5adec544647d271c48780 Feb 18 01:19:37 crc kubenswrapper[4847]: I0218 01:19:37.681672 4847 generic.go:334] "Generic (PLEG): container finished" podID="aff962e0-6ef9-4a38-86ae-10c0a136da45" containerID="391b4d119838317246eaef2e9dd299bb29e842b5a26533ea10cd4cbe2fd4f27b" exitCode=0 Feb 18 01:19:37 crc kubenswrapper[4847]: I0218 01:19:37.682860 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfbrd" event={"ID":"aff962e0-6ef9-4a38-86ae-10c0a136da45","Type":"ContainerDied","Data":"391b4d119838317246eaef2e9dd299bb29e842b5a26533ea10cd4cbe2fd4f27b"} Feb 18 01:19:37 crc kubenswrapper[4847]: I0218 01:19:37.683866 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfbrd" event={"ID":"aff962e0-6ef9-4a38-86ae-10c0a136da45","Type":"ContainerStarted","Data":"3d98fde1a50e4bb6e684cc2b21de5aa27893a48f0af5adec544647d271c48780"} Feb 18 01:19:39 crc kubenswrapper[4847]: E0218 01:19:39.406777 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:19:39 crc kubenswrapper[4847]: E0218 01:19:39.406788 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:19:43 crc kubenswrapper[4847]: I0218 01:19:43.858120 4847 generic.go:334] "Generic (PLEG): container finished" podID="aff962e0-6ef9-4a38-86ae-10c0a136da45" containerID="3067337ab8dc60014765f254400da70f63f58347a00905f26e087fa8bc7aa877" exitCode=0 Feb 18 01:19:43 crc kubenswrapper[4847]: I0218 01:19:43.858273 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfbrd" event={"ID":"aff962e0-6ef9-4a38-86ae-10c0a136da45","Type":"ContainerDied","Data":"3067337ab8dc60014765f254400da70f63f58347a00905f26e087fa8bc7aa877"} Feb 18 01:19:45 crc kubenswrapper[4847]: I0218 01:19:45.935338 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfbrd" event={"ID":"aff962e0-6ef9-4a38-86ae-10c0a136da45","Type":"ContainerStarted","Data":"560a83a9eda3527577f939a172e722ebd533cdcbd526eba34db0d1be0ccfcfad"} Feb 18 01:19:45 crc kubenswrapper[4847]: I0218 01:19:45.968920 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sfbrd" podStartSLOduration=3.935034931 podStartE2EDuration="10.968905539s" podCreationTimestamp="2026-02-18 01:19:35 +0000 UTC" firstStartedPulling="2026-02-18 01:19:37.685235374 +0000 UTC m=+3251.062586316" lastFinishedPulling="2026-02-18 01:19:44.719105992 +0000 UTC m=+3258.096456924" observedRunningTime="2026-02-18 01:19:45.967146756 +0000 
UTC m=+3259.344497698" watchObservedRunningTime="2026-02-18 01:19:45.968905539 +0000 UTC m=+3259.346256471" Feb 18 01:19:46 crc kubenswrapper[4847]: I0218 01:19:46.213101 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sfbrd" Feb 18 01:19:46 crc kubenswrapper[4847]: I0218 01:19:46.213518 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sfbrd" Feb 18 01:19:47 crc kubenswrapper[4847]: I0218 01:19:47.290180 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-sfbrd" podUID="aff962e0-6ef9-4a38-86ae-10c0a136da45" containerName="registry-server" probeResult="failure" output=< Feb 18 01:19:47 crc kubenswrapper[4847]: timeout: failed to connect service ":50051" within 1s Feb 18 01:19:47 crc kubenswrapper[4847]: > Feb 18 01:19:50 crc kubenswrapper[4847]: E0218 01:19:50.407902 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:19:52 crc kubenswrapper[4847]: E0218 01:19:52.407285 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:19:56 crc kubenswrapper[4847]: I0218 01:19:56.287852 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sfbrd" Feb 18 01:19:56 crc kubenswrapper[4847]: I0218 01:19:56.358807 4847 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sfbrd" Feb 18 01:19:56 crc kubenswrapper[4847]: I0218 01:19:56.482923 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sfbrd"] Feb 18 01:19:56 crc kubenswrapper[4847]: I0218 01:19:56.556409 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vj5w5"] Feb 18 01:19:56 crc kubenswrapper[4847]: I0218 01:19:56.556635 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vj5w5" podUID="fe22ea9b-4ee3-46dd-afeb-803d41ac163b" containerName="registry-server" containerID="cri-o://e40a400304989e35321f4aa181b4a56f0d51368ba50e2e88a870865fd4ef951b" gracePeriod=2 Feb 18 01:19:57 crc kubenswrapper[4847]: I0218 01:19:57.103424 4847 generic.go:334] "Generic (PLEG): container finished" podID="fe22ea9b-4ee3-46dd-afeb-803d41ac163b" containerID="e40a400304989e35321f4aa181b4a56f0d51368ba50e2e88a870865fd4ef951b" exitCode=0 Feb 18 01:19:57 crc kubenswrapper[4847]: I0218 01:19:57.103483 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vj5w5" event={"ID":"fe22ea9b-4ee3-46dd-afeb-803d41ac163b","Type":"ContainerDied","Data":"e40a400304989e35321f4aa181b4a56f0d51368ba50e2e88a870865fd4ef951b"} Feb 18 01:19:57 crc kubenswrapper[4847]: I0218 01:19:57.104072 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vj5w5" event={"ID":"fe22ea9b-4ee3-46dd-afeb-803d41ac163b","Type":"ContainerDied","Data":"ab08392ea3145f258988e3425b3ff22cde75a8a99095490df9a37288c0087824"} Feb 18 01:19:57 crc kubenswrapper[4847]: I0218 01:19:57.104084 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab08392ea3145f258988e3425b3ff22cde75a8a99095490df9a37288c0087824" Feb 18 01:19:57 crc kubenswrapper[4847]: I0218 01:19:57.157088 4847 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vj5w5" Feb 18 01:19:57 crc kubenswrapper[4847]: I0218 01:19:57.264204 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p4bl\" (UniqueName: \"kubernetes.io/projected/fe22ea9b-4ee3-46dd-afeb-803d41ac163b-kube-api-access-2p4bl\") pod \"fe22ea9b-4ee3-46dd-afeb-803d41ac163b\" (UID: \"fe22ea9b-4ee3-46dd-afeb-803d41ac163b\") " Feb 18 01:19:57 crc kubenswrapper[4847]: I0218 01:19:57.264364 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe22ea9b-4ee3-46dd-afeb-803d41ac163b-utilities\") pod \"fe22ea9b-4ee3-46dd-afeb-803d41ac163b\" (UID: \"fe22ea9b-4ee3-46dd-afeb-803d41ac163b\") " Feb 18 01:19:57 crc kubenswrapper[4847]: I0218 01:19:57.264506 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe22ea9b-4ee3-46dd-afeb-803d41ac163b-catalog-content\") pod \"fe22ea9b-4ee3-46dd-afeb-803d41ac163b\" (UID: \"fe22ea9b-4ee3-46dd-afeb-803d41ac163b\") " Feb 18 01:19:57 crc kubenswrapper[4847]: I0218 01:19:57.266188 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe22ea9b-4ee3-46dd-afeb-803d41ac163b-utilities" (OuterVolumeSpecName: "utilities") pod "fe22ea9b-4ee3-46dd-afeb-803d41ac163b" (UID: "fe22ea9b-4ee3-46dd-afeb-803d41ac163b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:19:57 crc kubenswrapper[4847]: I0218 01:19:57.271919 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe22ea9b-4ee3-46dd-afeb-803d41ac163b-kube-api-access-2p4bl" (OuterVolumeSpecName: "kube-api-access-2p4bl") pod "fe22ea9b-4ee3-46dd-afeb-803d41ac163b" (UID: "fe22ea9b-4ee3-46dd-afeb-803d41ac163b"). 
InnerVolumeSpecName "kube-api-access-2p4bl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:19:57 crc kubenswrapper[4847]: I0218 01:19:57.313840 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe22ea9b-4ee3-46dd-afeb-803d41ac163b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe22ea9b-4ee3-46dd-afeb-803d41ac163b" (UID: "fe22ea9b-4ee3-46dd-afeb-803d41ac163b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:19:57 crc kubenswrapper[4847]: I0218 01:19:57.367493 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe22ea9b-4ee3-46dd-afeb-803d41ac163b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:19:57 crc kubenswrapper[4847]: I0218 01:19:57.367532 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p4bl\" (UniqueName: \"kubernetes.io/projected/fe22ea9b-4ee3-46dd-afeb-803d41ac163b-kube-api-access-2p4bl\") on node \"crc\" DevicePath \"\"" Feb 18 01:19:57 crc kubenswrapper[4847]: I0218 01:19:57.367546 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe22ea9b-4ee3-46dd-afeb-803d41ac163b-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:19:58 crc kubenswrapper[4847]: I0218 01:19:58.114557 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vj5w5" Feb 18 01:19:58 crc kubenswrapper[4847]: I0218 01:19:58.144401 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vj5w5"] Feb 18 01:19:58 crc kubenswrapper[4847]: I0218 01:19:58.153816 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vj5w5"] Feb 18 01:19:59 crc kubenswrapper[4847]: I0218 01:19:59.419359 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe22ea9b-4ee3-46dd-afeb-803d41ac163b" path="/var/lib/kubelet/pods/fe22ea9b-4ee3-46dd-afeb-803d41ac163b/volumes" Feb 18 01:20:04 crc kubenswrapper[4847]: E0218 01:20:04.416752 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:20:04 crc kubenswrapper[4847]: E0218 01:20:04.417757 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:20:15 crc kubenswrapper[4847]: E0218 01:20:15.407883 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:20:16 crc kubenswrapper[4847]: I0218 01:20:16.897032 4847 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-operators-6d2f5"] Feb 18 01:20:16 crc kubenswrapper[4847]: E0218 01:20:16.897852 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe22ea9b-4ee3-46dd-afeb-803d41ac163b" containerName="registry-server" Feb 18 01:20:16 crc kubenswrapper[4847]: I0218 01:20:16.897867 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe22ea9b-4ee3-46dd-afeb-803d41ac163b" containerName="registry-server" Feb 18 01:20:16 crc kubenswrapper[4847]: E0218 01:20:16.897884 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe22ea9b-4ee3-46dd-afeb-803d41ac163b" containerName="extract-content" Feb 18 01:20:16 crc kubenswrapper[4847]: I0218 01:20:16.897892 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe22ea9b-4ee3-46dd-afeb-803d41ac163b" containerName="extract-content" Feb 18 01:20:16 crc kubenswrapper[4847]: E0218 01:20:16.897910 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe22ea9b-4ee3-46dd-afeb-803d41ac163b" containerName="extract-utilities" Feb 18 01:20:16 crc kubenswrapper[4847]: I0218 01:20:16.897918 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe22ea9b-4ee3-46dd-afeb-803d41ac163b" containerName="extract-utilities" Feb 18 01:20:16 crc kubenswrapper[4847]: I0218 01:20:16.898175 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe22ea9b-4ee3-46dd-afeb-803d41ac163b" containerName="registry-server" Feb 18 01:20:16 crc kubenswrapper[4847]: I0218 01:20:16.900235 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6d2f5" Feb 18 01:20:16 crc kubenswrapper[4847]: I0218 01:20:16.928999 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6d2f5"] Feb 18 01:20:16 crc kubenswrapper[4847]: I0218 01:20:16.934279 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bc9b897-7b90-473b-b1d2-a6e89c56fbf7-utilities\") pod \"redhat-operators-6d2f5\" (UID: \"9bc9b897-7b90-473b-b1d2-a6e89c56fbf7\") " pod="openshift-marketplace/redhat-operators-6d2f5" Feb 18 01:20:16 crc kubenswrapper[4847]: I0218 01:20:16.934724 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bc9b897-7b90-473b-b1d2-a6e89c56fbf7-catalog-content\") pod \"redhat-operators-6d2f5\" (UID: \"9bc9b897-7b90-473b-b1d2-a6e89c56fbf7\") " pod="openshift-marketplace/redhat-operators-6d2f5" Feb 18 01:20:16 crc kubenswrapper[4847]: I0218 01:20:16.934811 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrsgb\" (UniqueName: \"kubernetes.io/projected/9bc9b897-7b90-473b-b1d2-a6e89c56fbf7-kube-api-access-rrsgb\") pod \"redhat-operators-6d2f5\" (UID: \"9bc9b897-7b90-473b-b1d2-a6e89c56fbf7\") " pod="openshift-marketplace/redhat-operators-6d2f5" Feb 18 01:20:17 crc kubenswrapper[4847]: I0218 01:20:17.036563 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bc9b897-7b90-473b-b1d2-a6e89c56fbf7-utilities\") pod \"redhat-operators-6d2f5\" (UID: \"9bc9b897-7b90-473b-b1d2-a6e89c56fbf7\") " pod="openshift-marketplace/redhat-operators-6d2f5" Feb 18 01:20:17 crc kubenswrapper[4847]: I0218 01:20:17.037103 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bc9b897-7b90-473b-b1d2-a6e89c56fbf7-catalog-content\") pod \"redhat-operators-6d2f5\" (UID: \"9bc9b897-7b90-473b-b1d2-a6e89c56fbf7\") " pod="openshift-marketplace/redhat-operators-6d2f5" Feb 18 01:20:17 crc kubenswrapper[4847]: I0218 01:20:17.037140 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bc9b897-7b90-473b-b1d2-a6e89c56fbf7-utilities\") pod \"redhat-operators-6d2f5\" (UID: \"9bc9b897-7b90-473b-b1d2-a6e89c56fbf7\") " pod="openshift-marketplace/redhat-operators-6d2f5" Feb 18 01:20:17 crc kubenswrapper[4847]: I0218 01:20:17.037162 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrsgb\" (UniqueName: \"kubernetes.io/projected/9bc9b897-7b90-473b-b1d2-a6e89c56fbf7-kube-api-access-rrsgb\") pod \"redhat-operators-6d2f5\" (UID: \"9bc9b897-7b90-473b-b1d2-a6e89c56fbf7\") " pod="openshift-marketplace/redhat-operators-6d2f5" Feb 18 01:20:17 crc kubenswrapper[4847]: I0218 01:20:17.037707 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bc9b897-7b90-473b-b1d2-a6e89c56fbf7-catalog-content\") pod \"redhat-operators-6d2f5\" (UID: \"9bc9b897-7b90-473b-b1d2-a6e89c56fbf7\") " pod="openshift-marketplace/redhat-operators-6d2f5" Feb 18 01:20:17 crc kubenswrapper[4847]: I0218 01:20:17.063396 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrsgb\" (UniqueName: \"kubernetes.io/projected/9bc9b897-7b90-473b-b1d2-a6e89c56fbf7-kube-api-access-rrsgb\") pod \"redhat-operators-6d2f5\" (UID: \"9bc9b897-7b90-473b-b1d2-a6e89c56fbf7\") " pod="openshift-marketplace/redhat-operators-6d2f5" Feb 18 01:20:17 crc kubenswrapper[4847]: I0218 01:20:17.244080 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6d2f5" Feb 18 01:20:17 crc kubenswrapper[4847]: E0218 01:20:17.415310 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:20:17 crc kubenswrapper[4847]: I0218 01:20:17.814171 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6d2f5"] Feb 18 01:20:18 crc kubenswrapper[4847]: I0218 01:20:18.389898 4847 generic.go:334] "Generic (PLEG): container finished" podID="9bc9b897-7b90-473b-b1d2-a6e89c56fbf7" containerID="315f39684fa3467ce55ca3b79b7a1ce75726485abc38703585315e046934cc29" exitCode=0 Feb 18 01:20:18 crc kubenswrapper[4847]: I0218 01:20:18.390143 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6d2f5" event={"ID":"9bc9b897-7b90-473b-b1d2-a6e89c56fbf7","Type":"ContainerDied","Data":"315f39684fa3467ce55ca3b79b7a1ce75726485abc38703585315e046934cc29"} Feb 18 01:20:18 crc kubenswrapper[4847]: I0218 01:20:18.390172 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6d2f5" event={"ID":"9bc9b897-7b90-473b-b1d2-a6e89c56fbf7","Type":"ContainerStarted","Data":"f3a0b070fd2321d50686b72880c5f3eb3df5f932df3f92bf7bb5028113a85065"} Feb 18 01:20:20 crc kubenswrapper[4847]: I0218 01:20:20.413906 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6d2f5" event={"ID":"9bc9b897-7b90-473b-b1d2-a6e89c56fbf7","Type":"ContainerStarted","Data":"7fa30272d078e9980ca86580891c357c1bac46f3cb079aed9473608bae90f7b8"} Feb 18 01:20:23 crc kubenswrapper[4847]: I0218 01:20:23.447553 4847 generic.go:334] "Generic (PLEG): container finished" 
podID="9bc9b897-7b90-473b-b1d2-a6e89c56fbf7" containerID="7fa30272d078e9980ca86580891c357c1bac46f3cb079aed9473608bae90f7b8" exitCode=0 Feb 18 01:20:23 crc kubenswrapper[4847]: I0218 01:20:23.448039 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6d2f5" event={"ID":"9bc9b897-7b90-473b-b1d2-a6e89c56fbf7","Type":"ContainerDied","Data":"7fa30272d078e9980ca86580891c357c1bac46f3cb079aed9473608bae90f7b8"} Feb 18 01:20:23 crc kubenswrapper[4847]: I0218 01:20:23.479934 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mths5"] Feb 18 01:20:23 crc kubenswrapper[4847]: I0218 01:20:23.485595 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mths5" Feb 18 01:20:23 crc kubenswrapper[4847]: I0218 01:20:23.497088 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mths5"] Feb 18 01:20:23 crc kubenswrapper[4847]: I0218 01:20:23.599997 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9c7c39-a4e6-4699-93e6-b475832cd2f7-utilities\") pod \"redhat-marketplace-mths5\" (UID: \"ea9c7c39-a4e6-4699-93e6-b475832cd2f7\") " pod="openshift-marketplace/redhat-marketplace-mths5" Feb 18 01:20:23 crc kubenswrapper[4847]: I0218 01:20:23.601635 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt2s7\" (UniqueName: \"kubernetes.io/projected/ea9c7c39-a4e6-4699-93e6-b475832cd2f7-kube-api-access-nt2s7\") pod \"redhat-marketplace-mths5\" (UID: \"ea9c7c39-a4e6-4699-93e6-b475832cd2f7\") " pod="openshift-marketplace/redhat-marketplace-mths5" Feb 18 01:20:23 crc kubenswrapper[4847]: I0218 01:20:23.601778 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/ea9c7c39-a4e6-4699-93e6-b475832cd2f7-catalog-content\") pod \"redhat-marketplace-mths5\" (UID: \"ea9c7c39-a4e6-4699-93e6-b475832cd2f7\") " pod="openshift-marketplace/redhat-marketplace-mths5" Feb 18 01:20:23 crc kubenswrapper[4847]: I0218 01:20:23.704433 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt2s7\" (UniqueName: \"kubernetes.io/projected/ea9c7c39-a4e6-4699-93e6-b475832cd2f7-kube-api-access-nt2s7\") pod \"redhat-marketplace-mths5\" (UID: \"ea9c7c39-a4e6-4699-93e6-b475832cd2f7\") " pod="openshift-marketplace/redhat-marketplace-mths5" Feb 18 01:20:23 crc kubenswrapper[4847]: I0218 01:20:23.704575 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9c7c39-a4e6-4699-93e6-b475832cd2f7-catalog-content\") pod \"redhat-marketplace-mths5\" (UID: \"ea9c7c39-a4e6-4699-93e6-b475832cd2f7\") " pod="openshift-marketplace/redhat-marketplace-mths5" Feb 18 01:20:23 crc kubenswrapper[4847]: I0218 01:20:23.704824 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9c7c39-a4e6-4699-93e6-b475832cd2f7-utilities\") pod \"redhat-marketplace-mths5\" (UID: \"ea9c7c39-a4e6-4699-93e6-b475832cd2f7\") " pod="openshift-marketplace/redhat-marketplace-mths5" Feb 18 01:20:23 crc kubenswrapper[4847]: I0218 01:20:23.705442 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9c7c39-a4e6-4699-93e6-b475832cd2f7-catalog-content\") pod \"redhat-marketplace-mths5\" (UID: \"ea9c7c39-a4e6-4699-93e6-b475832cd2f7\") " pod="openshift-marketplace/redhat-marketplace-mths5" Feb 18 01:20:23 crc kubenswrapper[4847]: I0218 01:20:23.705689 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ea9c7c39-a4e6-4699-93e6-b475832cd2f7-utilities\") pod \"redhat-marketplace-mths5\" (UID: \"ea9c7c39-a4e6-4699-93e6-b475832cd2f7\") " pod="openshift-marketplace/redhat-marketplace-mths5" Feb 18 01:20:23 crc kubenswrapper[4847]: I0218 01:20:23.725340 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt2s7\" (UniqueName: \"kubernetes.io/projected/ea9c7c39-a4e6-4699-93e6-b475832cd2f7-kube-api-access-nt2s7\") pod \"redhat-marketplace-mths5\" (UID: \"ea9c7c39-a4e6-4699-93e6-b475832cd2f7\") " pod="openshift-marketplace/redhat-marketplace-mths5" Feb 18 01:20:23 crc kubenswrapper[4847]: I0218 01:20:23.843904 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mths5" Feb 18 01:20:24 crc kubenswrapper[4847]: I0218 01:20:24.373476 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mths5"] Feb 18 01:20:24 crc kubenswrapper[4847]: I0218 01:20:24.460529 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6d2f5" event={"ID":"9bc9b897-7b90-473b-b1d2-a6e89c56fbf7","Type":"ContainerStarted","Data":"014935e7d9e554b77687baf9b65a6bfad53ad27c18e5d4bf4951bde008558b13"} Feb 18 01:20:24 crc kubenswrapper[4847]: I0218 01:20:24.465212 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mths5" event={"ID":"ea9c7c39-a4e6-4699-93e6-b475832cd2f7","Type":"ContainerStarted","Data":"dbad10f4e77a522ac34614760bfaffd9cb29f0412ee03e137b0cbb2a0c431952"} Feb 18 01:20:24 crc kubenswrapper[4847]: I0218 01:20:24.491250 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6d2f5" podStartSLOduration=3.038736138 podStartE2EDuration="8.491233294s" podCreationTimestamp="2026-02-18 01:20:16 +0000 UTC" firstStartedPulling="2026-02-18 01:20:18.39175549 +0000 UTC 
m=+3291.769106432" lastFinishedPulling="2026-02-18 01:20:23.844252636 +0000 UTC m=+3297.221603588" observedRunningTime="2026-02-18 01:20:24.488043404 +0000 UTC m=+3297.865394376" watchObservedRunningTime="2026-02-18 01:20:24.491233294 +0000 UTC m=+3297.868584256" Feb 18 01:20:25 crc kubenswrapper[4847]: I0218 01:20:25.474674 4847 generic.go:334] "Generic (PLEG): container finished" podID="ea9c7c39-a4e6-4699-93e6-b475832cd2f7" containerID="247845f4a27a61d666b9e68b5ac0ddf62563bb7b8f0aed17ab609249c4019d76" exitCode=0 Feb 18 01:20:25 crc kubenswrapper[4847]: I0218 01:20:25.474748 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mths5" event={"ID":"ea9c7c39-a4e6-4699-93e6-b475832cd2f7","Type":"ContainerDied","Data":"247845f4a27a61d666b9e68b5ac0ddf62563bb7b8f0aed17ab609249c4019d76"} Feb 18 01:20:26 crc kubenswrapper[4847]: I0218 01:20:26.485828 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mths5" event={"ID":"ea9c7c39-a4e6-4699-93e6-b475832cd2f7","Type":"ContainerStarted","Data":"29f75c4122e78111664102d3cb13e4a209ffff0cb4d8a75b2d98dfa62218bf49"} Feb 18 01:20:27 crc kubenswrapper[4847]: I0218 01:20:27.244504 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6d2f5" Feb 18 01:20:27 crc kubenswrapper[4847]: I0218 01:20:27.244554 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6d2f5" Feb 18 01:20:27 crc kubenswrapper[4847]: I0218 01:20:27.499049 4847 generic.go:334] "Generic (PLEG): container finished" podID="ea9c7c39-a4e6-4699-93e6-b475832cd2f7" containerID="29f75c4122e78111664102d3cb13e4a209ffff0cb4d8a75b2d98dfa62218bf49" exitCode=0 Feb 18 01:20:27 crc kubenswrapper[4847]: I0218 01:20:27.499321 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mths5" 
event={"ID":"ea9c7c39-a4e6-4699-93e6-b475832cd2f7","Type":"ContainerDied","Data":"29f75c4122e78111664102d3cb13e4a209ffff0cb4d8a75b2d98dfa62218bf49"} Feb 18 01:20:28 crc kubenswrapper[4847]: I0218 01:20:28.301737 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6d2f5" podUID="9bc9b897-7b90-473b-b1d2-a6e89c56fbf7" containerName="registry-server" probeResult="failure" output=< Feb 18 01:20:28 crc kubenswrapper[4847]: timeout: failed to connect service ":50051" within 1s Feb 18 01:20:28 crc kubenswrapper[4847]: > Feb 18 01:20:28 crc kubenswrapper[4847]: E0218 01:20:28.405568 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:20:28 crc kubenswrapper[4847]: I0218 01:20:28.511986 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mths5" event={"ID":"ea9c7c39-a4e6-4699-93e6-b475832cd2f7","Type":"ContainerStarted","Data":"4818a1269f2493cb827f1a8cdcb06433c2311ad526bd01925ab4a5f96474ed6f"} Feb 18 01:20:30 crc kubenswrapper[4847]: E0218 01:20:30.406719 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:20:33 crc kubenswrapper[4847]: I0218 01:20:33.844377 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mths5" Feb 18 01:20:33 crc kubenswrapper[4847]: I0218 01:20:33.845292 4847 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mths5" Feb 18 01:20:33 crc kubenswrapper[4847]: I0218 01:20:33.925514 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mths5" Feb 18 01:20:33 crc kubenswrapper[4847]: I0218 01:20:33.960183 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mths5" podStartSLOduration=8.548876766 podStartE2EDuration="10.960160808s" podCreationTimestamp="2026-02-18 01:20:23 +0000 UTC" firstStartedPulling="2026-02-18 01:20:25.476499915 +0000 UTC m=+3298.853850847" lastFinishedPulling="2026-02-18 01:20:27.887783947 +0000 UTC m=+3301.265134889" observedRunningTime="2026-02-18 01:20:28.540288419 +0000 UTC m=+3301.917639381" watchObservedRunningTime="2026-02-18 01:20:33.960160808 +0000 UTC m=+3307.337511770" Feb 18 01:20:34 crc kubenswrapper[4847]: I0218 01:20:34.638841 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mths5" Feb 18 01:20:34 crc kubenswrapper[4847]: I0218 01:20:34.688711 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mths5"] Feb 18 01:20:35 crc kubenswrapper[4847]: I0218 01:20:35.093360 4847 scope.go:117] "RemoveContainer" containerID="defa891a2176ab73ebb44edf5294eca525fd46b7634e7cc66f12e8363f9ba1e7" Feb 18 01:20:35 crc kubenswrapper[4847]: I0218 01:20:35.132860 4847 scope.go:117] "RemoveContainer" containerID="71db3d6812f687e6fa06303bf410c9b7ec3934fc26b103a360faad9c3f3fdda4" Feb 18 01:20:35 crc kubenswrapper[4847]: I0218 01:20:35.184340 4847 scope.go:117] "RemoveContainer" containerID="e40a400304989e35321f4aa181b4a56f0d51368ba50e2e88a870865fd4ef951b" Feb 18 01:20:36 crc kubenswrapper[4847]: I0218 01:20:36.606632 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mths5" 
podUID="ea9c7c39-a4e6-4699-93e6-b475832cd2f7" containerName="registry-server" containerID="cri-o://4818a1269f2493cb827f1a8cdcb06433c2311ad526bd01925ab4a5f96474ed6f" gracePeriod=2 Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.123448 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mths5" Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.213357 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9c7c39-a4e6-4699-93e6-b475832cd2f7-utilities\") pod \"ea9c7c39-a4e6-4699-93e6-b475832cd2f7\" (UID: \"ea9c7c39-a4e6-4699-93e6-b475832cd2f7\") " Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.213510 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt2s7\" (UniqueName: \"kubernetes.io/projected/ea9c7c39-a4e6-4699-93e6-b475832cd2f7-kube-api-access-nt2s7\") pod \"ea9c7c39-a4e6-4699-93e6-b475832cd2f7\" (UID: \"ea9c7c39-a4e6-4699-93e6-b475832cd2f7\") " Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.214860 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9c7c39-a4e6-4699-93e6-b475832cd2f7-catalog-content\") pod \"ea9c7c39-a4e6-4699-93e6-b475832cd2f7\" (UID: \"ea9c7c39-a4e6-4699-93e6-b475832cd2f7\") " Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.215002 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea9c7c39-a4e6-4699-93e6-b475832cd2f7-utilities" (OuterVolumeSpecName: "utilities") pod "ea9c7c39-a4e6-4699-93e6-b475832cd2f7" (UID: "ea9c7c39-a4e6-4699-93e6-b475832cd2f7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.215854 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea9c7c39-a4e6-4699-93e6-b475832cd2f7-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.220218 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea9c7c39-a4e6-4699-93e6-b475832cd2f7-kube-api-access-nt2s7" (OuterVolumeSpecName: "kube-api-access-nt2s7") pod "ea9c7c39-a4e6-4699-93e6-b475832cd2f7" (UID: "ea9c7c39-a4e6-4699-93e6-b475832cd2f7"). InnerVolumeSpecName "kube-api-access-nt2s7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.237119 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea9c7c39-a4e6-4699-93e6-b475832cd2f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea9c7c39-a4e6-4699-93e6-b475832cd2f7" (UID: "ea9c7c39-a4e6-4699-93e6-b475832cd2f7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.317750 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt2s7\" (UniqueName: \"kubernetes.io/projected/ea9c7c39-a4e6-4699-93e6-b475832cd2f7-kube-api-access-nt2s7\") on node \"crc\" DevicePath \"\"" Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.317784 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea9c7c39-a4e6-4699-93e6-b475832cd2f7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.624315 4847 generic.go:334] "Generic (PLEG): container finished" podID="ea9c7c39-a4e6-4699-93e6-b475832cd2f7" containerID="4818a1269f2493cb827f1a8cdcb06433c2311ad526bd01925ab4a5f96474ed6f" exitCode=0 Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.624363 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mths5" event={"ID":"ea9c7c39-a4e6-4699-93e6-b475832cd2f7","Type":"ContainerDied","Data":"4818a1269f2493cb827f1a8cdcb06433c2311ad526bd01925ab4a5f96474ed6f"} Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.624393 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mths5" event={"ID":"ea9c7c39-a4e6-4699-93e6-b475832cd2f7","Type":"ContainerDied","Data":"dbad10f4e77a522ac34614760bfaffd9cb29f0412ee03e137b0cbb2a0c431952"} Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.624412 4847 scope.go:117] "RemoveContainer" containerID="4818a1269f2493cb827f1a8cdcb06433c2311ad526bd01925ab4a5f96474ed6f" Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.624424 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mths5" Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.663991 4847 scope.go:117] "RemoveContainer" containerID="29f75c4122e78111664102d3cb13e4a209ffff0cb4d8a75b2d98dfa62218bf49" Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.670618 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mths5"] Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.683541 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mths5"] Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.701951 4847 scope.go:117] "RemoveContainer" containerID="247845f4a27a61d666b9e68b5ac0ddf62563bb7b8f0aed17ab609249c4019d76" Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.784246 4847 scope.go:117] "RemoveContainer" containerID="4818a1269f2493cb827f1a8cdcb06433c2311ad526bd01925ab4a5f96474ed6f" Feb 18 01:20:37 crc kubenswrapper[4847]: E0218 01:20:37.785123 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4818a1269f2493cb827f1a8cdcb06433c2311ad526bd01925ab4a5f96474ed6f\": container with ID starting with 4818a1269f2493cb827f1a8cdcb06433c2311ad526bd01925ab4a5f96474ed6f not found: ID does not exist" containerID="4818a1269f2493cb827f1a8cdcb06433c2311ad526bd01925ab4a5f96474ed6f" Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.785177 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4818a1269f2493cb827f1a8cdcb06433c2311ad526bd01925ab4a5f96474ed6f"} err="failed to get container status \"4818a1269f2493cb827f1a8cdcb06433c2311ad526bd01925ab4a5f96474ed6f\": rpc error: code = NotFound desc = could not find container \"4818a1269f2493cb827f1a8cdcb06433c2311ad526bd01925ab4a5f96474ed6f\": container with ID starting with 4818a1269f2493cb827f1a8cdcb06433c2311ad526bd01925ab4a5f96474ed6f not found: 
ID does not exist" Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.785229 4847 scope.go:117] "RemoveContainer" containerID="29f75c4122e78111664102d3cb13e4a209ffff0cb4d8a75b2d98dfa62218bf49" Feb 18 01:20:37 crc kubenswrapper[4847]: E0218 01:20:37.785628 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29f75c4122e78111664102d3cb13e4a209ffff0cb4d8a75b2d98dfa62218bf49\": container with ID starting with 29f75c4122e78111664102d3cb13e4a209ffff0cb4d8a75b2d98dfa62218bf49 not found: ID does not exist" containerID="29f75c4122e78111664102d3cb13e4a209ffff0cb4d8a75b2d98dfa62218bf49" Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.785693 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29f75c4122e78111664102d3cb13e4a209ffff0cb4d8a75b2d98dfa62218bf49"} err="failed to get container status \"29f75c4122e78111664102d3cb13e4a209ffff0cb4d8a75b2d98dfa62218bf49\": rpc error: code = NotFound desc = could not find container \"29f75c4122e78111664102d3cb13e4a209ffff0cb4d8a75b2d98dfa62218bf49\": container with ID starting with 29f75c4122e78111664102d3cb13e4a209ffff0cb4d8a75b2d98dfa62218bf49 not found: ID does not exist" Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.785736 4847 scope.go:117] "RemoveContainer" containerID="247845f4a27a61d666b9e68b5ac0ddf62563bb7b8f0aed17ab609249c4019d76" Feb 18 01:20:37 crc kubenswrapper[4847]: E0218 01:20:37.786311 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"247845f4a27a61d666b9e68b5ac0ddf62563bb7b8f0aed17ab609249c4019d76\": container with ID starting with 247845f4a27a61d666b9e68b5ac0ddf62563bb7b8f0aed17ab609249c4019d76 not found: ID does not exist" containerID="247845f4a27a61d666b9e68b5ac0ddf62563bb7b8f0aed17ab609249c4019d76" Feb 18 01:20:37 crc kubenswrapper[4847]: I0218 01:20:37.786357 4847 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"247845f4a27a61d666b9e68b5ac0ddf62563bb7b8f0aed17ab609249c4019d76"} err="failed to get container status \"247845f4a27a61d666b9e68b5ac0ddf62563bb7b8f0aed17ab609249c4019d76\": rpc error: code = NotFound desc = could not find container \"247845f4a27a61d666b9e68b5ac0ddf62563bb7b8f0aed17ab609249c4019d76\": container with ID starting with 247845f4a27a61d666b9e68b5ac0ddf62563bb7b8f0aed17ab609249c4019d76 not found: ID does not exist" Feb 18 01:20:38 crc kubenswrapper[4847]: I0218 01:20:38.315465 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6d2f5" podUID="9bc9b897-7b90-473b-b1d2-a6e89c56fbf7" containerName="registry-server" probeResult="failure" output=< Feb 18 01:20:38 crc kubenswrapper[4847]: timeout: failed to connect service ":50051" within 1s Feb 18 01:20:38 crc kubenswrapper[4847]: > Feb 18 01:20:39 crc kubenswrapper[4847]: E0218 01:20:39.411400 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:20:39 crc kubenswrapper[4847]: I0218 01:20:39.417736 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea9c7c39-a4e6-4699-93e6-b475832cd2f7" path="/var/lib/kubelet/pods/ea9c7c39-a4e6-4699-93e6-b475832cd2f7/volumes" Feb 18 01:20:43 crc kubenswrapper[4847]: E0218 01:20:43.408318 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:20:47 crc 
kubenswrapper[4847]: I0218 01:20:47.328489 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6d2f5" Feb 18 01:20:47 crc kubenswrapper[4847]: I0218 01:20:47.425985 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6d2f5" Feb 18 01:20:48 crc kubenswrapper[4847]: I0218 01:20:48.089223 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6d2f5"] Feb 18 01:20:48 crc kubenswrapper[4847]: I0218 01:20:48.780865 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6d2f5" podUID="9bc9b897-7b90-473b-b1d2-a6e89c56fbf7" containerName="registry-server" containerID="cri-o://014935e7d9e554b77687baf9b65a6bfad53ad27c18e5d4bf4951bde008558b13" gracePeriod=2 Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.257568 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6d2f5" Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.387150 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrsgb\" (UniqueName: \"kubernetes.io/projected/9bc9b897-7b90-473b-b1d2-a6e89c56fbf7-kube-api-access-rrsgb\") pod \"9bc9b897-7b90-473b-b1d2-a6e89c56fbf7\" (UID: \"9bc9b897-7b90-473b-b1d2-a6e89c56fbf7\") " Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.387284 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bc9b897-7b90-473b-b1d2-a6e89c56fbf7-catalog-content\") pod \"9bc9b897-7b90-473b-b1d2-a6e89c56fbf7\" (UID: \"9bc9b897-7b90-473b-b1d2-a6e89c56fbf7\") " Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.387443 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bc9b897-7b90-473b-b1d2-a6e89c56fbf7-utilities\") pod \"9bc9b897-7b90-473b-b1d2-a6e89c56fbf7\" (UID: \"9bc9b897-7b90-473b-b1d2-a6e89c56fbf7\") " Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.388462 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bc9b897-7b90-473b-b1d2-a6e89c56fbf7-utilities" (OuterVolumeSpecName: "utilities") pod "9bc9b897-7b90-473b-b1d2-a6e89c56fbf7" (UID: "9bc9b897-7b90-473b-b1d2-a6e89c56fbf7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.396294 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bc9b897-7b90-473b-b1d2-a6e89c56fbf7-kube-api-access-rrsgb" (OuterVolumeSpecName: "kube-api-access-rrsgb") pod "9bc9b897-7b90-473b-b1d2-a6e89c56fbf7" (UID: "9bc9b897-7b90-473b-b1d2-a6e89c56fbf7"). InnerVolumeSpecName "kube-api-access-rrsgb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.491021 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bc9b897-7b90-473b-b1d2-a6e89c56fbf7-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.491055 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrsgb\" (UniqueName: \"kubernetes.io/projected/9bc9b897-7b90-473b-b1d2-a6e89c56fbf7-kube-api-access-rrsgb\") on node \"crc\" DevicePath \"\"" Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.570477 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bc9b897-7b90-473b-b1d2-a6e89c56fbf7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9bc9b897-7b90-473b-b1d2-a6e89c56fbf7" (UID: "9bc9b897-7b90-473b-b1d2-a6e89c56fbf7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.593586 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bc9b897-7b90-473b-b1d2-a6e89c56fbf7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.789709 4847 generic.go:334] "Generic (PLEG): container finished" podID="9bc9b897-7b90-473b-b1d2-a6e89c56fbf7" containerID="014935e7d9e554b77687baf9b65a6bfad53ad27c18e5d4bf4951bde008558b13" exitCode=0 Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.789747 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6d2f5" event={"ID":"9bc9b897-7b90-473b-b1d2-a6e89c56fbf7","Type":"ContainerDied","Data":"014935e7d9e554b77687baf9b65a6bfad53ad27c18e5d4bf4951bde008558b13"} Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.789754 4847 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6d2f5" Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.789772 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6d2f5" event={"ID":"9bc9b897-7b90-473b-b1d2-a6e89c56fbf7","Type":"ContainerDied","Data":"f3a0b070fd2321d50686b72880c5f3eb3df5f932df3f92bf7bb5028113a85065"} Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.789791 4847 scope.go:117] "RemoveContainer" containerID="014935e7d9e554b77687baf9b65a6bfad53ad27c18e5d4bf4951bde008558b13" Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.835915 4847 scope.go:117] "RemoveContainer" containerID="7fa30272d078e9980ca86580891c357c1bac46f3cb079aed9473608bae90f7b8" Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.838212 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6d2f5"] Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.847743 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6d2f5"] Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.867318 4847 scope.go:117] "RemoveContainer" containerID="315f39684fa3467ce55ca3b79b7a1ce75726485abc38703585315e046934cc29" Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.919518 4847 scope.go:117] "RemoveContainer" containerID="014935e7d9e554b77687baf9b65a6bfad53ad27c18e5d4bf4951bde008558b13" Feb 18 01:20:49 crc kubenswrapper[4847]: E0218 01:20:49.920438 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"014935e7d9e554b77687baf9b65a6bfad53ad27c18e5d4bf4951bde008558b13\": container with ID starting with 014935e7d9e554b77687baf9b65a6bfad53ad27c18e5d4bf4951bde008558b13 not found: ID does not exist" containerID="014935e7d9e554b77687baf9b65a6bfad53ad27c18e5d4bf4951bde008558b13" Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.920486 4847 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"014935e7d9e554b77687baf9b65a6bfad53ad27c18e5d4bf4951bde008558b13"} err="failed to get container status \"014935e7d9e554b77687baf9b65a6bfad53ad27c18e5d4bf4951bde008558b13\": rpc error: code = NotFound desc = could not find container \"014935e7d9e554b77687baf9b65a6bfad53ad27c18e5d4bf4951bde008558b13\": container with ID starting with 014935e7d9e554b77687baf9b65a6bfad53ad27c18e5d4bf4951bde008558b13 not found: ID does not exist" Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.920514 4847 scope.go:117] "RemoveContainer" containerID="7fa30272d078e9980ca86580891c357c1bac46f3cb079aed9473608bae90f7b8" Feb 18 01:20:49 crc kubenswrapper[4847]: E0218 01:20:49.920947 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fa30272d078e9980ca86580891c357c1bac46f3cb079aed9473608bae90f7b8\": container with ID starting with 7fa30272d078e9980ca86580891c357c1bac46f3cb079aed9473608bae90f7b8 not found: ID does not exist" containerID="7fa30272d078e9980ca86580891c357c1bac46f3cb079aed9473608bae90f7b8" Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.920972 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fa30272d078e9980ca86580891c357c1bac46f3cb079aed9473608bae90f7b8"} err="failed to get container status \"7fa30272d078e9980ca86580891c357c1bac46f3cb079aed9473608bae90f7b8\": rpc error: code = NotFound desc = could not find container \"7fa30272d078e9980ca86580891c357c1bac46f3cb079aed9473608bae90f7b8\": container with ID starting with 7fa30272d078e9980ca86580891c357c1bac46f3cb079aed9473608bae90f7b8 not found: ID does not exist" Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.920986 4847 scope.go:117] "RemoveContainer" containerID="315f39684fa3467ce55ca3b79b7a1ce75726485abc38703585315e046934cc29" Feb 18 01:20:49 crc kubenswrapper[4847]: E0218 
01:20:49.921398 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"315f39684fa3467ce55ca3b79b7a1ce75726485abc38703585315e046934cc29\": container with ID starting with 315f39684fa3467ce55ca3b79b7a1ce75726485abc38703585315e046934cc29 not found: ID does not exist" containerID="315f39684fa3467ce55ca3b79b7a1ce75726485abc38703585315e046934cc29" Feb 18 01:20:49 crc kubenswrapper[4847]: I0218 01:20:49.921421 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"315f39684fa3467ce55ca3b79b7a1ce75726485abc38703585315e046934cc29"} err="failed to get container status \"315f39684fa3467ce55ca3b79b7a1ce75726485abc38703585315e046934cc29\": rpc error: code = NotFound desc = could not find container \"315f39684fa3467ce55ca3b79b7a1ce75726485abc38703585315e046934cc29\": container with ID starting with 315f39684fa3467ce55ca3b79b7a1ce75726485abc38703585315e046934cc29 not found: ID does not exist" Feb 18 01:20:50 crc kubenswrapper[4847]: E0218 01:20:50.051714 4847 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bc9b897_7b90_473b_b1d2_a6e89c56fbf7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bc9b897_7b90_473b_b1d2_a6e89c56fbf7.slice/crio-f3a0b070fd2321d50686b72880c5f3eb3df5f932df3f92bf7bb5028113a85065\": RecentStats: unable to find data in memory cache]" Feb 18 01:20:51 crc kubenswrapper[4847]: I0218 01:20:51.417281 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bc9b897-7b90-473b-b1d2-a6e89c56fbf7" path="/var/lib/kubelet/pods/9bc9b897-7b90-473b-b1d2-a6e89c56fbf7/volumes" Feb 18 01:20:52 crc kubenswrapper[4847]: E0218 01:20:52.408086 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:20:53 crc kubenswrapper[4847]: I0218 01:20:53.492348 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:20:53 crc kubenswrapper[4847]: I0218 01:20:53.494002 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:20:58 crc kubenswrapper[4847]: E0218 01:20:58.407055 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:21:04 crc kubenswrapper[4847]: E0218 01:21:04.409113 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:21:09 crc kubenswrapper[4847]: E0218 01:21:09.407995 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:21:16 crc kubenswrapper[4847]: E0218 01:21:16.407935 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:21:22 crc kubenswrapper[4847]: E0218 01:21:22.407031 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:21:23 crc kubenswrapper[4847]: I0218 01:21:23.496077 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:21:23 crc kubenswrapper[4847]: I0218 01:21:23.496171 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:21:30 crc kubenswrapper[4847]: E0218 01:21:30.407969 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:21:35 crc kubenswrapper[4847]: E0218 01:21:35.408059 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:21:44 crc kubenswrapper[4847]: E0218 01:21:44.408336 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:21:46 crc kubenswrapper[4847]: E0218 01:21:46.407582 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:21:53 crc kubenswrapper[4847]: I0218 01:21:53.492417 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:21:53 crc kubenswrapper[4847]: I0218 01:21:53.493360 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:21:53 crc kubenswrapper[4847]: I0218 01:21:53.493457 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 01:21:53 crc kubenswrapper[4847]: I0218 01:21:53.495727 4847 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"154c57adabbf819120d699fd0ee78eee9784a12a52f2c8bd23bd6b6288227572"} pod="openshift-machine-config-operator/machine-config-daemon-xsj47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 01:21:53 crc kubenswrapper[4847]: I0218 01:21:53.495881 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" containerID="cri-o://154c57adabbf819120d699fd0ee78eee9784a12a52f2c8bd23bd6b6288227572" gracePeriod=600 Feb 18 01:21:53 crc kubenswrapper[4847]: I0218 01:21:53.692797 4847 generic.go:334] "Generic (PLEG): container finished" podID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerID="154c57adabbf819120d699fd0ee78eee9784a12a52f2c8bd23bd6b6288227572" exitCode=0 Feb 18 01:21:53 crc kubenswrapper[4847]: I0218 01:21:53.692898 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerDied","Data":"154c57adabbf819120d699fd0ee78eee9784a12a52f2c8bd23bd6b6288227572"} Feb 18 01:21:53 crc kubenswrapper[4847]: I0218 01:21:53.693331 4847 scope.go:117] "RemoveContainer" containerID="c5c4fd3b8dcb1cdbd89866a065a6dfda5c7b9ac2d404168f703b082dcdd6d073" Feb 18 01:21:54 crc 
kubenswrapper[4847]: I0218 01:21:54.712023 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerStarted","Data":"4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c"} Feb 18 01:21:55 crc kubenswrapper[4847]: E0218 01:21:55.411333 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:21:57 crc kubenswrapper[4847]: E0218 01:21:57.420644 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.108915 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd"] Feb 18 01:22:07 crc kubenswrapper[4847]: E0218 01:22:07.109873 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc9b897-7b90-473b-b1d2-a6e89c56fbf7" containerName="extract-utilities" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.109889 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc9b897-7b90-473b-b1d2-a6e89c56fbf7" containerName="extract-utilities" Feb 18 01:22:07 crc kubenswrapper[4847]: E0218 01:22:07.109906 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea9c7c39-a4e6-4699-93e6-b475832cd2f7" containerName="extract-content" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.109913 4847 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="ea9c7c39-a4e6-4699-93e6-b475832cd2f7" containerName="extract-content" Feb 18 01:22:07 crc kubenswrapper[4847]: E0218 01:22:07.109933 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc9b897-7b90-473b-b1d2-a6e89c56fbf7" containerName="registry-server" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.109941 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc9b897-7b90-473b-b1d2-a6e89c56fbf7" containerName="registry-server" Feb 18 01:22:07 crc kubenswrapper[4847]: E0218 01:22:07.109955 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea9c7c39-a4e6-4699-93e6-b475832cd2f7" containerName="registry-server" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.109963 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea9c7c39-a4e6-4699-93e6-b475832cd2f7" containerName="registry-server" Feb 18 01:22:07 crc kubenswrapper[4847]: E0218 01:22:07.109987 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea9c7c39-a4e6-4699-93e6-b475832cd2f7" containerName="extract-utilities" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.109994 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea9c7c39-a4e6-4699-93e6-b475832cd2f7" containerName="extract-utilities" Feb 18 01:22:07 crc kubenswrapper[4847]: E0218 01:22:07.110025 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc9b897-7b90-473b-b1d2-a6e89c56fbf7" containerName="extract-content" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.110032 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc9b897-7b90-473b-b1d2-a6e89c56fbf7" containerName="extract-content" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.110264 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea9c7c39-a4e6-4699-93e6-b475832cd2f7" containerName="registry-server" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.110286 4847 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="9bc9b897-7b90-473b-b1d2-a6e89c56fbf7" containerName="registry-server" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.111308 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.117264 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.117296 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xzl9l" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.117270 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.117348 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.117440 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.130309 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd"] Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.153997 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-75hsd\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.154124 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-75hsd\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.154302 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-75hsd\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.154356 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-75hsd\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.154446 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-75hsd\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.154574 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" 
(UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-75hsd\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.154650 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k2vk\" (UniqueName: \"kubernetes.io/projected/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-kube-api-access-8k2vk\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-75hsd\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.255396 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-75hsd\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.255443 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-75hsd\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.255481 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-75hsd\" (UID: 
\"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.255534 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-75hsd\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.255558 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k2vk\" (UniqueName: \"kubernetes.io/projected/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-kube-api-access-8k2vk\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-75hsd\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.255632 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-75hsd\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.255650 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-75hsd\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:22:07 crc kubenswrapper[4847]: 
I0218 01:22:07.261347 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-75hsd\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.261698 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-75hsd\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.261980 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-75hsd\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.262327 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-75hsd\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.262429 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-75hsd\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.268976 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-75hsd\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.276255 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k2vk\" (UniqueName: \"kubernetes.io/projected/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-kube-api-access-8k2vk\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-75hsd\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:22:07 crc kubenswrapper[4847]: E0218 01:22:07.415660 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:22:07 crc kubenswrapper[4847]: I0218 01:22:07.442456 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:22:08 crc kubenswrapper[4847]: I0218 01:22:08.019278 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd"] Feb 18 01:22:08 crc kubenswrapper[4847]: W0218 01:22:08.039182 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27e2b216_c7e0_48cb_8fbe_1b286c6ca6c9.slice/crio-130ec25bc35371843c0a2b67387a5528ca4e6664e83b00bea1631358cc1d9f97 WatchSource:0}: Error finding container 130ec25bc35371843c0a2b67387a5528ca4e6664e83b00bea1631358cc1d9f97: Status 404 returned error can't find the container with id 130ec25bc35371843c0a2b67387a5528ca4e6664e83b00bea1631358cc1d9f97 Feb 18 01:22:08 crc kubenswrapper[4847]: I0218 01:22:08.881320 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" event={"ID":"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9","Type":"ContainerStarted","Data":"ff5b1990b402f26625e802c8cf16e4df97799fac693737e2e5ecb7ea6b2ee7f4"} Feb 18 01:22:08 crc kubenswrapper[4847]: I0218 01:22:08.883410 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" event={"ID":"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9","Type":"ContainerStarted","Data":"130ec25bc35371843c0a2b67387a5528ca4e6664e83b00bea1631358cc1d9f97"} Feb 18 01:22:08 crc kubenswrapper[4847]: I0218 01:22:08.904842 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" podStartSLOduration=1.504601363 podStartE2EDuration="1.904815668s" podCreationTimestamp="2026-02-18 01:22:07 +0000 UTC" firstStartedPulling="2026-02-18 01:22:08.045145451 +0000 UTC m=+3401.422496403" lastFinishedPulling="2026-02-18 01:22:08.445359726 +0000 UTC m=+3401.822710708" 
observedRunningTime="2026-02-18 01:22:08.900470828 +0000 UTC m=+3402.277821810" watchObservedRunningTime="2026-02-18 01:22:08.904815668 +0000 UTC m=+3402.282166650" Feb 18 01:22:12 crc kubenswrapper[4847]: E0218 01:22:12.408337 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:22:20 crc kubenswrapper[4847]: E0218 01:22:20.408203 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:22:25 crc kubenswrapper[4847]: E0218 01:22:25.408029 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:22:33 crc kubenswrapper[4847]: E0218 01:22:33.405706 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:22:38 crc kubenswrapper[4847]: E0218 01:22:38.409966 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:22:44 crc kubenswrapper[4847]: E0218 01:22:44.408807 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:22:53 crc kubenswrapper[4847]: I0218 01:22:53.409650 4847 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 01:22:53 crc kubenswrapper[4847]: E0218 01:22:53.533283 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:22:53 crc kubenswrapper[4847]: E0218 01:22:53.533398 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:22:53 crc kubenswrapper[4847]: E0218 01:22:53.533714 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjwt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-k4t5r_openstack(452f74c1-fa5f-464b-9943-a4a1c2d5c48a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:22:53 crc kubenswrapper[4847]: E0218 01:22:53.535392 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:22:58 crc kubenswrapper[4847]: E0218 01:22:58.407443 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:23:02 crc kubenswrapper[4847]: I0218 01:23:02.595786 4847 generic.go:334] "Generic (PLEG): container finished" podID="27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9" containerID="ff5b1990b402f26625e802c8cf16e4df97799fac693737e2e5ecb7ea6b2ee7f4" exitCode=2 Feb 18 01:23:02 crc kubenswrapper[4847]: I0218 01:23:02.595860 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" event={"ID":"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9","Type":"ContainerDied","Data":"ff5b1990b402f26625e802c8cf16e4df97799fac693737e2e5ecb7ea6b2ee7f4"} Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.120184 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.242661 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-inventory\") pod \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.242933 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-telemetry-combined-ca-bundle\") pod \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.242961 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k2vk\" (UniqueName: \"kubernetes.io/projected/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-kube-api-access-8k2vk\") pod \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.243114 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ceilometer-compute-config-data-2\") pod \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.243253 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ceilometer-compute-config-data-1\") pod \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " Feb 18 01:23:04 crc kubenswrapper[4847]: 
I0218 01:23:04.243289 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ceilometer-compute-config-data-0\") pod \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.243307 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ssh-key-openstack-edpm-ipam\") pod \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\" (UID: \"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9\") " Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.251540 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9" (UID: "27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.252576 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-kube-api-access-8k2vk" (OuterVolumeSpecName: "kube-api-access-8k2vk") pod "27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9" (UID: "27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9"). InnerVolumeSpecName "kube-api-access-8k2vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.278362 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-inventory" (OuterVolumeSpecName: "inventory") pod "27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9" (UID: "27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.280320 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9" (UID: "27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.284264 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9" (UID: "27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.285101 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9" (UID: "27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.306440 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9" (UID: "27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.346350 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.346391 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.346460 4847 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.346480 4847 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-inventory\") on node \"crc\" DevicePath \"\"" Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.346519 4847 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.346535 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k2vk\" (UniqueName: \"kubernetes.io/projected/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-kube-api-access-8k2vk\") on node \"crc\" DevicePath \"\"" Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.346546 4847 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.631013 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" event={"ID":"27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9","Type":"ContainerDied","Data":"130ec25bc35371843c0a2b67387a5528ca4e6664e83b00bea1631358cc1d9f97"} Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.631100 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="130ec25bc35371843c0a2b67387a5528ca4e6664e83b00bea1631358cc1d9f97" Feb 18 01:23:04 crc kubenswrapper[4847]: I0218 01:23:04.631113 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-75hsd" Feb 18 01:23:07 crc kubenswrapper[4847]: E0218 01:23:07.425202 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:23:10 crc kubenswrapper[4847]: E0218 01:23:10.408950 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:23:22 crc kubenswrapper[4847]: E0218 01:23:22.406723 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:23:25 crc kubenswrapper[4847]: E0218 01:23:25.579670 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:23:25 crc kubenswrapper[4847]: E0218 01:23:25.580348 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:23:25 crc kubenswrapper[4847]: E0218 01:23:25.580562 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9fhbdh67dh64h568hch65dhb8h67chf7h66dhfh585h565h85h554h68ch8bh56ch677h57bhb5h544h575hd7h5f8hc7h66dh669h57bh589h56cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:23:25 crc kubenswrapper[4847]: E0218 01:23:25.581876 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:23:37 crc kubenswrapper[4847]: E0218 01:23:37.429770 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:23:38 crc kubenswrapper[4847]: E0218 01:23:38.407585 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:23:42 crc kubenswrapper[4847]: I0218 01:23:42.681396 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9dqx4"] Feb 18 01:23:42 crc kubenswrapper[4847]: E0218 01:23:42.682328 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 01:23:42 crc kubenswrapper[4847]: I0218 01:23:42.682351 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 01:23:42 crc kubenswrapper[4847]: I0218 01:23:42.682793 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 18 01:23:42 crc kubenswrapper[4847]: I0218 01:23:42.685341 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9dqx4" Feb 18 01:23:42 crc kubenswrapper[4847]: I0218 01:23:42.696885 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9dqx4"] Feb 18 01:23:42 crc kubenswrapper[4847]: I0218 01:23:42.769960 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66478b16-41f8-497e-88ba-bdd100e1e33a-catalog-content\") pod \"community-operators-9dqx4\" (UID: \"66478b16-41f8-497e-88ba-bdd100e1e33a\") " pod="openshift-marketplace/community-operators-9dqx4" Feb 18 01:23:42 crc kubenswrapper[4847]: I0218 01:23:42.770507 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p48dg\" (UniqueName: \"kubernetes.io/projected/66478b16-41f8-497e-88ba-bdd100e1e33a-kube-api-access-p48dg\") pod \"community-operators-9dqx4\" (UID: \"66478b16-41f8-497e-88ba-bdd100e1e33a\") " pod="openshift-marketplace/community-operators-9dqx4" Feb 18 01:23:42 crc kubenswrapper[4847]: I0218 01:23:42.770570 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66478b16-41f8-497e-88ba-bdd100e1e33a-utilities\") pod \"community-operators-9dqx4\" (UID: \"66478b16-41f8-497e-88ba-bdd100e1e33a\") " pod="openshift-marketplace/community-operators-9dqx4" Feb 18 01:23:42 crc kubenswrapper[4847]: I0218 01:23:42.872473 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p48dg\" (UniqueName: \"kubernetes.io/projected/66478b16-41f8-497e-88ba-bdd100e1e33a-kube-api-access-p48dg\") pod \"community-operators-9dqx4\" (UID: \"66478b16-41f8-497e-88ba-bdd100e1e33a\") " pod="openshift-marketplace/community-operators-9dqx4" Feb 18 01:23:42 crc kubenswrapper[4847]: I0218 01:23:42.872547 4847 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66478b16-41f8-497e-88ba-bdd100e1e33a-utilities\") pod \"community-operators-9dqx4\" (UID: \"66478b16-41f8-497e-88ba-bdd100e1e33a\") " pod="openshift-marketplace/community-operators-9dqx4" Feb 18 01:23:42 crc kubenswrapper[4847]: I0218 01:23:42.872697 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66478b16-41f8-497e-88ba-bdd100e1e33a-catalog-content\") pod \"community-operators-9dqx4\" (UID: \"66478b16-41f8-497e-88ba-bdd100e1e33a\") " pod="openshift-marketplace/community-operators-9dqx4" Feb 18 01:23:42 crc kubenswrapper[4847]: I0218 01:23:42.873366 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66478b16-41f8-497e-88ba-bdd100e1e33a-catalog-content\") pod \"community-operators-9dqx4\" (UID: \"66478b16-41f8-497e-88ba-bdd100e1e33a\") " pod="openshift-marketplace/community-operators-9dqx4" Feb 18 01:23:42 crc kubenswrapper[4847]: I0218 01:23:42.873383 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66478b16-41f8-497e-88ba-bdd100e1e33a-utilities\") pod \"community-operators-9dqx4\" (UID: \"66478b16-41f8-497e-88ba-bdd100e1e33a\") " pod="openshift-marketplace/community-operators-9dqx4" Feb 18 01:23:42 crc kubenswrapper[4847]: I0218 01:23:42.898671 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p48dg\" (UniqueName: \"kubernetes.io/projected/66478b16-41f8-497e-88ba-bdd100e1e33a-kube-api-access-p48dg\") pod \"community-operators-9dqx4\" (UID: \"66478b16-41f8-497e-88ba-bdd100e1e33a\") " pod="openshift-marketplace/community-operators-9dqx4" Feb 18 01:23:43 crc kubenswrapper[4847]: I0218 01:23:43.012219 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9dqx4" Feb 18 01:23:43 crc kubenswrapper[4847]: W0218 01:23:43.606568 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66478b16_41f8_497e_88ba_bdd100e1e33a.slice/crio-8119badd85d153d3d3fe8cb2d0c721636bbbe0bd4562cc79b0d980488918fd1b WatchSource:0}: Error finding container 8119badd85d153d3d3fe8cb2d0c721636bbbe0bd4562cc79b0d980488918fd1b: Status 404 returned error can't find the container with id 8119badd85d153d3d3fe8cb2d0c721636bbbe0bd4562cc79b0d980488918fd1b Feb 18 01:23:43 crc kubenswrapper[4847]: I0218 01:23:43.613933 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9dqx4"] Feb 18 01:23:44 crc kubenswrapper[4847]: I0218 01:23:44.114567 4847 generic.go:334] "Generic (PLEG): container finished" podID="66478b16-41f8-497e-88ba-bdd100e1e33a" containerID="b14b18c5097601179c98edb532dfb71a6bc0a386f663904d9766fb731524f5ce" exitCode=0 Feb 18 01:23:44 crc kubenswrapper[4847]: I0218 01:23:44.114662 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dqx4" event={"ID":"66478b16-41f8-497e-88ba-bdd100e1e33a","Type":"ContainerDied","Data":"b14b18c5097601179c98edb532dfb71a6bc0a386f663904d9766fb731524f5ce"} Feb 18 01:23:44 crc kubenswrapper[4847]: I0218 01:23:44.115022 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dqx4" event={"ID":"66478b16-41f8-497e-88ba-bdd100e1e33a","Type":"ContainerStarted","Data":"8119badd85d153d3d3fe8cb2d0c721636bbbe0bd4562cc79b0d980488918fd1b"} Feb 18 01:23:45 crc kubenswrapper[4847]: I0218 01:23:45.126031 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dqx4" 
event={"ID":"66478b16-41f8-497e-88ba-bdd100e1e33a","Type":"ContainerStarted","Data":"fbc6ffd1391c873d18b755308eae8f80d7d18cd11710cadd551da4a6b5dd3b84"} Feb 18 01:23:46 crc kubenswrapper[4847]: I0218 01:23:46.143955 4847 generic.go:334] "Generic (PLEG): container finished" podID="66478b16-41f8-497e-88ba-bdd100e1e33a" containerID="fbc6ffd1391c873d18b755308eae8f80d7d18cd11710cadd551da4a6b5dd3b84" exitCode=0 Feb 18 01:23:46 crc kubenswrapper[4847]: I0218 01:23:46.144197 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dqx4" event={"ID":"66478b16-41f8-497e-88ba-bdd100e1e33a","Type":"ContainerDied","Data":"fbc6ffd1391c873d18b755308eae8f80d7d18cd11710cadd551da4a6b5dd3b84"} Feb 18 01:23:47 crc kubenswrapper[4847]: I0218 01:23:47.166262 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dqx4" event={"ID":"66478b16-41f8-497e-88ba-bdd100e1e33a","Type":"ContainerStarted","Data":"bd0035bcf632281284612230183accf55f779d67684843cd28d3d23abdefd5f8"} Feb 18 01:23:47 crc kubenswrapper[4847]: I0218 01:23:47.205973 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9dqx4" podStartSLOduration=2.804885535 podStartE2EDuration="5.205949298s" podCreationTimestamp="2026-02-18 01:23:42 +0000 UTC" firstStartedPulling="2026-02-18 01:23:44.117517662 +0000 UTC m=+3497.494868634" lastFinishedPulling="2026-02-18 01:23:46.518581445 +0000 UTC m=+3499.895932397" observedRunningTime="2026-02-18 01:23:47.200028148 +0000 UTC m=+3500.577379100" watchObservedRunningTime="2026-02-18 01:23:47.205949298 +0000 UTC m=+3500.583300260" Feb 18 01:23:51 crc kubenswrapper[4847]: E0218 01:23:51.407442 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:23:52 crc kubenswrapper[4847]: E0218 01:23:52.406230 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:23:53 crc kubenswrapper[4847]: I0218 01:23:53.012498 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9dqx4" Feb 18 01:23:53 crc kubenswrapper[4847]: I0218 01:23:53.012882 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9dqx4" Feb 18 01:23:53 crc kubenswrapper[4847]: I0218 01:23:53.081527 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9dqx4" Feb 18 01:23:53 crc kubenswrapper[4847]: I0218 01:23:53.328992 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9dqx4" Feb 18 01:23:53 crc kubenswrapper[4847]: I0218 01:23:53.439253 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9dqx4"] Feb 18 01:23:53 crc kubenswrapper[4847]: I0218 01:23:53.491523 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:23:53 crc kubenswrapper[4847]: I0218 01:23:53.491590 4847 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:23:55 crc kubenswrapper[4847]: I0218 01:23:55.268447 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9dqx4" podUID="66478b16-41f8-497e-88ba-bdd100e1e33a" containerName="registry-server" containerID="cri-o://bd0035bcf632281284612230183accf55f779d67684843cd28d3d23abdefd5f8" gracePeriod=2 Feb 18 01:23:55 crc kubenswrapper[4847]: E0218 01:23:55.544665 4847 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66478b16_41f8_497e_88ba_bdd100e1e33a.slice/crio-conmon-bd0035bcf632281284612230183accf55f779d67684843cd28d3d23abdefd5f8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66478b16_41f8_497e_88ba_bdd100e1e33a.slice/crio-bd0035bcf632281284612230183accf55f779d67684843cd28d3d23abdefd5f8.scope\": RecentStats: unable to find data in memory cache]" Feb 18 01:23:55 crc kubenswrapper[4847]: I0218 01:23:55.857111 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9dqx4" Feb 18 01:23:55 crc kubenswrapper[4847]: I0218 01:23:55.882173 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p48dg\" (UniqueName: \"kubernetes.io/projected/66478b16-41f8-497e-88ba-bdd100e1e33a-kube-api-access-p48dg\") pod \"66478b16-41f8-497e-88ba-bdd100e1e33a\" (UID: \"66478b16-41f8-497e-88ba-bdd100e1e33a\") " Feb 18 01:23:55 crc kubenswrapper[4847]: I0218 01:23:55.882431 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66478b16-41f8-497e-88ba-bdd100e1e33a-catalog-content\") pod \"66478b16-41f8-497e-88ba-bdd100e1e33a\" (UID: \"66478b16-41f8-497e-88ba-bdd100e1e33a\") " Feb 18 01:23:55 crc kubenswrapper[4847]: I0218 01:23:55.882522 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66478b16-41f8-497e-88ba-bdd100e1e33a-utilities\") pod \"66478b16-41f8-497e-88ba-bdd100e1e33a\" (UID: \"66478b16-41f8-497e-88ba-bdd100e1e33a\") " Feb 18 01:23:55 crc kubenswrapper[4847]: I0218 01:23:55.883829 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66478b16-41f8-497e-88ba-bdd100e1e33a-utilities" (OuterVolumeSpecName: "utilities") pod "66478b16-41f8-497e-88ba-bdd100e1e33a" (UID: "66478b16-41f8-497e-88ba-bdd100e1e33a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:23:55 crc kubenswrapper[4847]: I0218 01:23:55.893338 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66478b16-41f8-497e-88ba-bdd100e1e33a-kube-api-access-p48dg" (OuterVolumeSpecName: "kube-api-access-p48dg") pod "66478b16-41f8-497e-88ba-bdd100e1e33a" (UID: "66478b16-41f8-497e-88ba-bdd100e1e33a"). InnerVolumeSpecName "kube-api-access-p48dg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:23:55 crc kubenswrapper[4847]: I0218 01:23:55.953375 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66478b16-41f8-497e-88ba-bdd100e1e33a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66478b16-41f8-497e-88ba-bdd100e1e33a" (UID: "66478b16-41f8-497e-88ba-bdd100e1e33a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:23:55 crc kubenswrapper[4847]: I0218 01:23:55.985094 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66478b16-41f8-497e-88ba-bdd100e1e33a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:23:55 crc kubenswrapper[4847]: I0218 01:23:55.985136 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66478b16-41f8-497e-88ba-bdd100e1e33a-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:23:55 crc kubenswrapper[4847]: I0218 01:23:55.985148 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p48dg\" (UniqueName: \"kubernetes.io/projected/66478b16-41f8-497e-88ba-bdd100e1e33a-kube-api-access-p48dg\") on node \"crc\" DevicePath \"\"" Feb 18 01:23:56 crc kubenswrapper[4847]: I0218 01:23:56.280410 4847 generic.go:334] "Generic (PLEG): container finished" podID="66478b16-41f8-497e-88ba-bdd100e1e33a" containerID="bd0035bcf632281284612230183accf55f779d67684843cd28d3d23abdefd5f8" exitCode=0 Feb 18 01:23:56 crc kubenswrapper[4847]: I0218 01:23:56.280451 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dqx4" event={"ID":"66478b16-41f8-497e-88ba-bdd100e1e33a","Type":"ContainerDied","Data":"bd0035bcf632281284612230183accf55f779d67684843cd28d3d23abdefd5f8"} Feb 18 01:23:56 crc kubenswrapper[4847]: I0218 01:23:56.280478 4847 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-9dqx4" event={"ID":"66478b16-41f8-497e-88ba-bdd100e1e33a","Type":"ContainerDied","Data":"8119badd85d153d3d3fe8cb2d0c721636bbbe0bd4562cc79b0d980488918fd1b"} Feb 18 01:23:56 crc kubenswrapper[4847]: I0218 01:23:56.280495 4847 scope.go:117] "RemoveContainer" containerID="bd0035bcf632281284612230183accf55f779d67684843cd28d3d23abdefd5f8" Feb 18 01:23:56 crc kubenswrapper[4847]: I0218 01:23:56.280518 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9dqx4" Feb 18 01:23:56 crc kubenswrapper[4847]: I0218 01:23:56.303447 4847 scope.go:117] "RemoveContainer" containerID="fbc6ffd1391c873d18b755308eae8f80d7d18cd11710cadd551da4a6b5dd3b84" Feb 18 01:23:56 crc kubenswrapper[4847]: I0218 01:23:56.323542 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9dqx4"] Feb 18 01:23:56 crc kubenswrapper[4847]: I0218 01:23:56.337660 4847 scope.go:117] "RemoveContainer" containerID="b14b18c5097601179c98edb532dfb71a6bc0a386f663904d9766fb731524f5ce" Feb 18 01:23:56 crc kubenswrapper[4847]: I0218 01:23:56.339742 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9dqx4"] Feb 18 01:23:56 crc kubenswrapper[4847]: I0218 01:23:56.375293 4847 scope.go:117] "RemoveContainer" containerID="bd0035bcf632281284612230183accf55f779d67684843cd28d3d23abdefd5f8" Feb 18 01:23:56 crc kubenswrapper[4847]: E0218 01:23:56.375720 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd0035bcf632281284612230183accf55f779d67684843cd28d3d23abdefd5f8\": container with ID starting with bd0035bcf632281284612230183accf55f779d67684843cd28d3d23abdefd5f8 not found: ID does not exist" containerID="bd0035bcf632281284612230183accf55f779d67684843cd28d3d23abdefd5f8" Feb 18 01:23:56 crc kubenswrapper[4847]: I0218 
01:23:56.375748 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd0035bcf632281284612230183accf55f779d67684843cd28d3d23abdefd5f8"} err="failed to get container status \"bd0035bcf632281284612230183accf55f779d67684843cd28d3d23abdefd5f8\": rpc error: code = NotFound desc = could not find container \"bd0035bcf632281284612230183accf55f779d67684843cd28d3d23abdefd5f8\": container with ID starting with bd0035bcf632281284612230183accf55f779d67684843cd28d3d23abdefd5f8 not found: ID does not exist" Feb 18 01:23:56 crc kubenswrapper[4847]: I0218 01:23:56.375772 4847 scope.go:117] "RemoveContainer" containerID="fbc6ffd1391c873d18b755308eae8f80d7d18cd11710cadd551da4a6b5dd3b84" Feb 18 01:23:56 crc kubenswrapper[4847]: E0218 01:23:56.375983 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbc6ffd1391c873d18b755308eae8f80d7d18cd11710cadd551da4a6b5dd3b84\": container with ID starting with fbc6ffd1391c873d18b755308eae8f80d7d18cd11710cadd551da4a6b5dd3b84 not found: ID does not exist" containerID="fbc6ffd1391c873d18b755308eae8f80d7d18cd11710cadd551da4a6b5dd3b84" Feb 18 01:23:56 crc kubenswrapper[4847]: I0218 01:23:56.376002 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbc6ffd1391c873d18b755308eae8f80d7d18cd11710cadd551da4a6b5dd3b84"} err="failed to get container status \"fbc6ffd1391c873d18b755308eae8f80d7d18cd11710cadd551da4a6b5dd3b84\": rpc error: code = NotFound desc = could not find container \"fbc6ffd1391c873d18b755308eae8f80d7d18cd11710cadd551da4a6b5dd3b84\": container with ID starting with fbc6ffd1391c873d18b755308eae8f80d7d18cd11710cadd551da4a6b5dd3b84 not found: ID does not exist" Feb 18 01:23:56 crc kubenswrapper[4847]: I0218 01:23:56.376014 4847 scope.go:117] "RemoveContainer" containerID="b14b18c5097601179c98edb532dfb71a6bc0a386f663904d9766fb731524f5ce" Feb 18 01:23:56 crc 
kubenswrapper[4847]: E0218 01:23:56.376235 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b14b18c5097601179c98edb532dfb71a6bc0a386f663904d9766fb731524f5ce\": container with ID starting with b14b18c5097601179c98edb532dfb71a6bc0a386f663904d9766fb731524f5ce not found: ID does not exist" containerID="b14b18c5097601179c98edb532dfb71a6bc0a386f663904d9766fb731524f5ce" Feb 18 01:23:56 crc kubenswrapper[4847]: I0218 01:23:56.376277 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b14b18c5097601179c98edb532dfb71a6bc0a386f663904d9766fb731524f5ce"} err="failed to get container status \"b14b18c5097601179c98edb532dfb71a6bc0a386f663904d9766fb731524f5ce\": rpc error: code = NotFound desc = could not find container \"b14b18c5097601179c98edb532dfb71a6bc0a386f663904d9766fb731524f5ce\": container with ID starting with b14b18c5097601179c98edb532dfb71a6bc0a386f663904d9766fb731524f5ce not found: ID does not exist" Feb 18 01:23:57 crc kubenswrapper[4847]: I0218 01:23:57.426880 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66478b16-41f8-497e-88ba-bdd100e1e33a" path="/var/lib/kubelet/pods/66478b16-41f8-497e-88ba-bdd100e1e33a/volumes" Feb 18 01:24:03 crc kubenswrapper[4847]: E0218 01:24:03.406774 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:24:06 crc kubenswrapper[4847]: E0218 01:24:06.407982 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:24:17 crc kubenswrapper[4847]: E0218 01:24:17.416963 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:24:17 crc kubenswrapper[4847]: E0218 01:24:17.418399 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:24:23 crc kubenswrapper[4847]: I0218 01:24:23.491695 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:24:23 crc kubenswrapper[4847]: I0218 01:24:23.492452 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:24:28 crc kubenswrapper[4847]: E0218 01:24:28.407182 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:24:31 crc kubenswrapper[4847]: E0218 01:24:31.407479 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:24:41 crc kubenswrapper[4847]: E0218 01:24:41.408534 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:24:46 crc kubenswrapper[4847]: E0218 01:24:46.406740 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:24:53 crc kubenswrapper[4847]: I0218 01:24:53.492121 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:24:53 crc kubenswrapper[4847]: I0218 01:24:53.492884 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:24:53 crc kubenswrapper[4847]: I0218 01:24:53.492952 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 01:24:53 crc kubenswrapper[4847]: I0218 01:24:53.494128 4847 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c"} pod="openshift-machine-config-operator/machine-config-daemon-xsj47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 01:24:53 crc kubenswrapper[4847]: I0218 01:24:53.494226 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" containerID="cri-o://4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" gracePeriod=600 Feb 18 01:24:53 crc kubenswrapper[4847]: E0218 01:24:53.627021 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:24:53 crc kubenswrapper[4847]: I0218 01:24:53.983460 4847 generic.go:334] "Generic (PLEG): container finished" podID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" exitCode=0 Feb 18 01:24:53 crc kubenswrapper[4847]: I0218 
01:24:53.983557 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerDied","Data":"4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c"} Feb 18 01:24:53 crc kubenswrapper[4847]: I0218 01:24:53.983899 4847 scope.go:117] "RemoveContainer" containerID="154c57adabbf819120d699fd0ee78eee9784a12a52f2c8bd23bd6b6288227572" Feb 18 01:24:53 crc kubenswrapper[4847]: I0218 01:24:53.984789 4847 scope.go:117] "RemoveContainer" containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:24:53 crc kubenswrapper[4847]: E0218 01:24:53.985146 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:24:55 crc kubenswrapper[4847]: E0218 01:24:55.405667 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:24:59 crc kubenswrapper[4847]: E0218 01:24:59.408117 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:25:07 crc kubenswrapper[4847]: I0218 
01:25:07.414801 4847 scope.go:117] "RemoveContainer" containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:25:07 crc kubenswrapper[4847]: E0218 01:25:07.415769 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:25:09 crc kubenswrapper[4847]: E0218 01:25:09.408948 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:25:14 crc kubenswrapper[4847]: E0218 01:25:14.406494 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:25:20 crc kubenswrapper[4847]: I0218 01:25:20.404963 4847 scope.go:117] "RemoveContainer" containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:25:20 crc kubenswrapper[4847]: E0218 01:25:20.405991 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:25:22 crc kubenswrapper[4847]: E0218 01:25:22.407106 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:25:25 crc kubenswrapper[4847]: E0218 01:25:25.410498 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:25:34 crc kubenswrapper[4847]: I0218 01:25:34.405830 4847 scope.go:117] "RemoveContainer" containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:25:34 crc kubenswrapper[4847]: E0218 01:25:34.406956 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:25:36 crc kubenswrapper[4847]: E0218 01:25:36.407871 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" 
Feb 18 01:25:37 crc kubenswrapper[4847]: E0218 01:25:37.423268 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:25:45 crc kubenswrapper[4847]: I0218 01:25:45.404846 4847 scope.go:117] "RemoveContainer" containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:25:45 crc kubenswrapper[4847]: E0218 01:25:45.407673 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:25:50 crc kubenswrapper[4847]: E0218 01:25:50.408666 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:25:50 crc kubenswrapper[4847]: E0218 01:25:50.408716 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:25:56 crc kubenswrapper[4847]: I0218 01:25:56.405827 4847 scope.go:117] "RemoveContainer" 
containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:25:56 crc kubenswrapper[4847]: E0218 01:25:56.407056 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:26:01 crc kubenswrapper[4847]: E0218 01:26:01.410266 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:26:04 crc kubenswrapper[4847]: E0218 01:26:04.406664 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:26:11 crc kubenswrapper[4847]: I0218 01:26:11.405413 4847 scope.go:117] "RemoveContainer" containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:26:11 crc kubenswrapper[4847]: E0218 01:26:11.406540 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:26:12 crc kubenswrapper[4847]: E0218 01:26:12.408282 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:26:19 crc kubenswrapper[4847]: E0218 01:26:19.413349 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:26:24 crc kubenswrapper[4847]: E0218 01:26:24.408023 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:26:25 crc kubenswrapper[4847]: I0218 01:26:25.405368 4847 scope.go:117] "RemoveContainer" containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:26:25 crc kubenswrapper[4847]: E0218 01:26:25.406195 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 
01:26:31 crc kubenswrapper[4847]: E0218 01:26:31.408115 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:26:36 crc kubenswrapper[4847]: I0218 01:26:36.405457 4847 scope.go:117] "RemoveContainer" containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:26:36 crc kubenswrapper[4847]: E0218 01:26:36.406575 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:26:39 crc kubenswrapper[4847]: E0218 01:26:39.409474 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:26:44 crc kubenswrapper[4847]: E0218 01:26:44.410186 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:26:48 crc kubenswrapper[4847]: I0218 01:26:48.403978 4847 scope.go:117] "RemoveContainer" 
containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:26:48 crc kubenswrapper[4847]: E0218 01:26:48.406040 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:26:53 crc kubenswrapper[4847]: E0218 01:26:53.407099 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:26:56 crc kubenswrapper[4847]: E0218 01:26:56.407748 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:27:02 crc kubenswrapper[4847]: I0218 01:27:02.405458 4847 scope.go:117] "RemoveContainer" containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:27:02 crc kubenswrapper[4847]: E0218 01:27:02.406480 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:27:04 crc kubenswrapper[4847]: E0218 01:27:04.407700 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:27:10 crc kubenswrapper[4847]: E0218 01:27:10.409717 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:27:13 crc kubenswrapper[4847]: I0218 01:27:13.404876 4847 scope.go:117] "RemoveContainer" containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:27:13 crc kubenswrapper[4847]: E0218 01:27:13.405931 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:27:16 crc kubenswrapper[4847]: E0218 01:27:16.408319 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 
01:27:24 crc kubenswrapper[4847]: I0218 01:27:24.405883 4847 scope.go:117] "RemoveContainer" containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:27:24 crc kubenswrapper[4847]: E0218 01:27:24.406996 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:27:24 crc kubenswrapper[4847]: E0218 01:27:24.409082 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:27:31 crc kubenswrapper[4847]: E0218 01:27:31.407873 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:27:36 crc kubenswrapper[4847]: E0218 01:27:36.413731 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:27:39 crc kubenswrapper[4847]: I0218 01:27:39.405180 4847 scope.go:117] "RemoveContainer" 
containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:27:39 crc kubenswrapper[4847]: E0218 01:27:39.406146 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:27:46 crc kubenswrapper[4847]: E0218 01:27:46.408119 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:27:50 crc kubenswrapper[4847]: E0218 01:27:50.409058 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:27:54 crc kubenswrapper[4847]: I0218 01:27:54.404663 4847 scope.go:117] "RemoveContainer" containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:27:54 crc kubenswrapper[4847]: E0218 01:27:54.405594 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:27:58 crc kubenswrapper[4847]: I0218 01:27:58.409272 4847 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 01:27:58 crc kubenswrapper[4847]: E0218 01:27:58.546834 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:27:58 crc kubenswrapper[4847]: E0218 01:27:58.547286 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:27:58 crc kubenswrapper[4847]: E0218 01:27:58.547454 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjwt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-k4t5r_openstack(452f74c1-fa5f-464b-9943-a4a1c2d5c48a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:27:58 crc kubenswrapper[4847]: E0218 01:27:58.549402 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:28:03 crc kubenswrapper[4847]: E0218 01:28:03.409006 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:28:08 crc kubenswrapper[4847]: I0218 01:28:08.405271 4847 scope.go:117] "RemoveContainer" containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:28:08 crc kubenswrapper[4847]: E0218 01:28:08.406455 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:28:11 crc kubenswrapper[4847]: E0218 01:28:11.407636 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:28:18 crc kubenswrapper[4847]: E0218 01:28:18.407659 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:28:22 crc kubenswrapper[4847]: I0218 01:28:22.405112 4847 scope.go:117] "RemoveContainer" containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:28:22 crc kubenswrapper[4847]: E0218 01:28:22.406440 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:28:26 crc kubenswrapper[4847]: E0218 01:28:26.407746 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:28:32 crc kubenswrapper[4847]: E0218 01:28:32.541299 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:28:32 crc kubenswrapper[4847]: E0218 01:28:32.541980 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:28:32 crc kubenswrapper[4847]: E0218 01:28:32.542162 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9fhbdh67dh64h568hch65dhb8h67chf7h66dhfh585h565h85h554h68ch8bh56ch677h57bhb5h544h575hd7h5f8hc7h66dh669h57bh589h56cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-
ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 18 01:28:32 crc kubenswrapper[4847]: E0218 01:28:32.543352 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:28:37 crc kubenswrapper[4847]: I0218 01:28:37.409806 4847 scope.go:117] "RemoveContainer" containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:28:37 crc kubenswrapper[4847]: E0218 01:28:37.410947 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:28:38 crc kubenswrapper[4847]: E0218 01:28:38.407735 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:28:46 crc kubenswrapper[4847]: E0218 01:28:46.409673 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:28:48 crc kubenswrapper[4847]: I0218 01:28:48.404883 4847 scope.go:117] "RemoveContainer" containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:28:48 crc kubenswrapper[4847]: E0218 01:28:48.405450 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:28:51 crc kubenswrapper[4847]: E0218 01:28:51.408008 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:29:01 crc kubenswrapper[4847]: E0218 01:29:01.407072 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:29:02 crc kubenswrapper[4847]: I0218 01:29:02.404768 4847 scope.go:117] "RemoveContainer" containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:29:02 crc kubenswrapper[4847]: E0218 01:29:02.405436 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:29:05 crc kubenswrapper[4847]: E0218 01:29:05.406557 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:29:13 crc kubenswrapper[4847]: E0218 01:29:13.407154 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:29:16 crc kubenswrapper[4847]: E0218 01:29:16.407898 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:29:17 crc kubenswrapper[4847]: I0218 01:29:17.424564 4847 scope.go:117] "RemoveContainer" containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:29:17 crc kubenswrapper[4847]: E0218 01:29:17.425071 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:29:25 crc kubenswrapper[4847]: E0218 01:29:25.408367 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:29:30 crc kubenswrapper[4847]: E0218 01:29:30.407819 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:29:32 crc kubenswrapper[4847]: I0218 01:29:32.404834 4847 scope.go:117] "RemoveContainer" containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:29:32 crc kubenswrapper[4847]: E0218 01:29:32.405450 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:29:36 crc kubenswrapper[4847]: E0218 01:29:36.408744 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:29:43 crc kubenswrapper[4847]: E0218 01:29:43.407209 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:29:44 crc kubenswrapper[4847]: I0218 01:29:44.404884 4847 scope.go:117] "RemoveContainer" containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:29:44 crc kubenswrapper[4847]: E0218 01:29:44.405401 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:29:49 crc kubenswrapper[4847]: E0218 01:29:49.408118 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:29:55 crc kubenswrapper[4847]: I0218 01:29:55.404845 4847 scope.go:117] "RemoveContainer" containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:29:55 crc kubenswrapper[4847]: E0218 01:29:55.407668 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:29:55 crc kubenswrapper[4847]: I0218 01:29:55.985113 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerStarted","Data":"6662f99cde0f39692de20655adaecf6fb0a06da58dc0042967bafe9377519292"} Feb 18 01:30:00 crc kubenswrapper[4847]: I0218 01:30:00.186647 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522970-2zhv9"] Feb 18 01:30:00 crc kubenswrapper[4847]: E0218 01:30:00.188822 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66478b16-41f8-497e-88ba-bdd100e1e33a" containerName="registry-server" Feb 18 01:30:00 crc kubenswrapper[4847]: I0218 01:30:00.188962 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="66478b16-41f8-497e-88ba-bdd100e1e33a" containerName="registry-server" Feb 18 01:30:00 crc kubenswrapper[4847]: E0218 01:30:00.189103 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66478b16-41f8-497e-88ba-bdd100e1e33a" containerName="extract-content" Feb 18 01:30:00 crc kubenswrapper[4847]: I0218 01:30:00.189245 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="66478b16-41f8-497e-88ba-bdd100e1e33a" containerName="extract-content" Feb 18 01:30:00 crc kubenswrapper[4847]: E0218 01:30:00.189407 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66478b16-41f8-497e-88ba-bdd100e1e33a" containerName="extract-utilities" Feb 18 01:30:00 crc kubenswrapper[4847]: I0218 01:30:00.189529 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="66478b16-41f8-497e-88ba-bdd100e1e33a" containerName="extract-utilities" Feb 18 01:30:00 crc 
kubenswrapper[4847]: I0218 01:30:00.189964 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="66478b16-41f8-497e-88ba-bdd100e1e33a" containerName="registry-server" Feb 18 01:30:00 crc kubenswrapper[4847]: I0218 01:30:00.191145 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-2zhv9" Feb 18 01:30:00 crc kubenswrapper[4847]: I0218 01:30:00.196020 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 01:30:00 crc kubenswrapper[4847]: I0218 01:30:00.196532 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 01:30:00 crc kubenswrapper[4847]: I0218 01:30:00.228707 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522970-2zhv9"] Feb 18 01:30:00 crc kubenswrapper[4847]: I0218 01:30:00.380554 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w66d7\" (UniqueName: \"kubernetes.io/projected/cc9e0a9d-caf4-4a54-a69c-dd78ae948c36-kube-api-access-w66d7\") pod \"collect-profiles-29522970-2zhv9\" (UID: \"cc9e0a9d-caf4-4a54-a69c-dd78ae948c36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-2zhv9" Feb 18 01:30:00 crc kubenswrapper[4847]: I0218 01:30:00.380683 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc9e0a9d-caf4-4a54-a69c-dd78ae948c36-secret-volume\") pod \"collect-profiles-29522970-2zhv9\" (UID: \"cc9e0a9d-caf4-4a54-a69c-dd78ae948c36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-2zhv9" Feb 18 01:30:00 crc kubenswrapper[4847]: I0218 01:30:00.380738 4847 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc9e0a9d-caf4-4a54-a69c-dd78ae948c36-config-volume\") pod \"collect-profiles-29522970-2zhv9\" (UID: \"cc9e0a9d-caf4-4a54-a69c-dd78ae948c36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-2zhv9" Feb 18 01:30:00 crc kubenswrapper[4847]: I0218 01:30:00.482614 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w66d7\" (UniqueName: \"kubernetes.io/projected/cc9e0a9d-caf4-4a54-a69c-dd78ae948c36-kube-api-access-w66d7\") pod \"collect-profiles-29522970-2zhv9\" (UID: \"cc9e0a9d-caf4-4a54-a69c-dd78ae948c36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-2zhv9" Feb 18 01:30:00 crc kubenswrapper[4847]: I0218 01:30:00.482690 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc9e0a9d-caf4-4a54-a69c-dd78ae948c36-secret-volume\") pod \"collect-profiles-29522970-2zhv9\" (UID: \"cc9e0a9d-caf4-4a54-a69c-dd78ae948c36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-2zhv9" Feb 18 01:30:00 crc kubenswrapper[4847]: I0218 01:30:00.482724 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc9e0a9d-caf4-4a54-a69c-dd78ae948c36-config-volume\") pod \"collect-profiles-29522970-2zhv9\" (UID: \"cc9e0a9d-caf4-4a54-a69c-dd78ae948c36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-2zhv9" Feb 18 01:30:00 crc kubenswrapper[4847]: I0218 01:30:00.483679 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc9e0a9d-caf4-4a54-a69c-dd78ae948c36-config-volume\") pod \"collect-profiles-29522970-2zhv9\" (UID: \"cc9e0a9d-caf4-4a54-a69c-dd78ae948c36\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-2zhv9" Feb 18 01:30:00 crc kubenswrapper[4847]: I0218 01:30:00.502308 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc9e0a9d-caf4-4a54-a69c-dd78ae948c36-secret-volume\") pod \"collect-profiles-29522970-2zhv9\" (UID: \"cc9e0a9d-caf4-4a54-a69c-dd78ae948c36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-2zhv9" Feb 18 01:30:00 crc kubenswrapper[4847]: I0218 01:30:00.510337 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w66d7\" (UniqueName: \"kubernetes.io/projected/cc9e0a9d-caf4-4a54-a69c-dd78ae948c36-kube-api-access-w66d7\") pod \"collect-profiles-29522970-2zhv9\" (UID: \"cc9e0a9d-caf4-4a54-a69c-dd78ae948c36\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-2zhv9" Feb 18 01:30:00 crc kubenswrapper[4847]: I0218 01:30:00.527056 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-2zhv9" Feb 18 01:30:00 crc kubenswrapper[4847]: I0218 01:30:00.978550 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522970-2zhv9"] Feb 18 01:30:01 crc kubenswrapper[4847]: I0218 01:30:01.050682 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-2zhv9" event={"ID":"cc9e0a9d-caf4-4a54-a69c-dd78ae948c36","Type":"ContainerStarted","Data":"d21a8ed56ccb1a06a6a19d531e15d2f9ec8fd541176d8c5afeb391625b083235"} Feb 18 01:30:02 crc kubenswrapper[4847]: I0218 01:30:02.066023 4847 generic.go:334] "Generic (PLEG): container finished" podID="cc9e0a9d-caf4-4a54-a69c-dd78ae948c36" containerID="a3d41c690555272b105a205b1d53219daaa07a2505964fa18305e1f3b89632b0" exitCode=0 Feb 18 01:30:02 crc kubenswrapper[4847]: I0218 01:30:02.066112 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-2zhv9" event={"ID":"cc9e0a9d-caf4-4a54-a69c-dd78ae948c36","Type":"ContainerDied","Data":"a3d41c690555272b105a205b1d53219daaa07a2505964fa18305e1f3b89632b0"} Feb 18 01:30:03 crc kubenswrapper[4847]: E0218 01:30:03.405613 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:30:03 crc kubenswrapper[4847]: I0218 01:30:03.470475 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-2zhv9" Feb 18 01:30:03 crc kubenswrapper[4847]: I0218 01:30:03.552237 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w66d7\" (UniqueName: \"kubernetes.io/projected/cc9e0a9d-caf4-4a54-a69c-dd78ae948c36-kube-api-access-w66d7\") pod \"cc9e0a9d-caf4-4a54-a69c-dd78ae948c36\" (UID: \"cc9e0a9d-caf4-4a54-a69c-dd78ae948c36\") " Feb 18 01:30:03 crc kubenswrapper[4847]: I0218 01:30:03.552296 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc9e0a9d-caf4-4a54-a69c-dd78ae948c36-secret-volume\") pod \"cc9e0a9d-caf4-4a54-a69c-dd78ae948c36\" (UID: \"cc9e0a9d-caf4-4a54-a69c-dd78ae948c36\") " Feb 18 01:30:03 crc kubenswrapper[4847]: I0218 01:30:03.552500 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc9e0a9d-caf4-4a54-a69c-dd78ae948c36-config-volume\") pod \"cc9e0a9d-caf4-4a54-a69c-dd78ae948c36\" (UID: \"cc9e0a9d-caf4-4a54-a69c-dd78ae948c36\") " Feb 18 01:30:03 crc kubenswrapper[4847]: I0218 01:30:03.553022 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc9e0a9d-caf4-4a54-a69c-dd78ae948c36-config-volume" (OuterVolumeSpecName: "config-volume") pod "cc9e0a9d-caf4-4a54-a69c-dd78ae948c36" (UID: "cc9e0a9d-caf4-4a54-a69c-dd78ae948c36"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 01:30:03 crc kubenswrapper[4847]: I0218 01:30:03.558083 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc9e0a9d-caf4-4a54-a69c-dd78ae948c36-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cc9e0a9d-caf4-4a54-a69c-dd78ae948c36" (UID: "cc9e0a9d-caf4-4a54-a69c-dd78ae948c36"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:30:03 crc kubenswrapper[4847]: I0218 01:30:03.558695 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc9e0a9d-caf4-4a54-a69c-dd78ae948c36-kube-api-access-w66d7" (OuterVolumeSpecName: "kube-api-access-w66d7") pod "cc9e0a9d-caf4-4a54-a69c-dd78ae948c36" (UID: "cc9e0a9d-caf4-4a54-a69c-dd78ae948c36"). InnerVolumeSpecName "kube-api-access-w66d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:30:03 crc kubenswrapper[4847]: I0218 01:30:03.654952 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w66d7\" (UniqueName: \"kubernetes.io/projected/cc9e0a9d-caf4-4a54-a69c-dd78ae948c36-kube-api-access-w66d7\") on node \"crc\" DevicePath \"\"" Feb 18 01:30:03 crc kubenswrapper[4847]: I0218 01:30:03.655007 4847 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cc9e0a9d-caf4-4a54-a69c-dd78ae948c36-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 01:30:03 crc kubenswrapper[4847]: I0218 01:30:03.655017 4847 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc9e0a9d-caf4-4a54-a69c-dd78ae948c36-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 01:30:04 crc kubenswrapper[4847]: I0218 01:30:04.090007 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-2zhv9" event={"ID":"cc9e0a9d-caf4-4a54-a69c-dd78ae948c36","Type":"ContainerDied","Data":"d21a8ed56ccb1a06a6a19d531e15d2f9ec8fd541176d8c5afeb391625b083235"} Feb 18 01:30:04 crc kubenswrapper[4847]: I0218 01:30:04.090043 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d21a8ed56ccb1a06a6a19d531e15d2f9ec8fd541176d8c5afeb391625b083235" Feb 18 01:30:04 crc kubenswrapper[4847]: I0218 01:30:04.090083 4847 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522970-2zhv9" Feb 18 01:30:04 crc kubenswrapper[4847]: I0218 01:30:04.582855 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522925-48n2m"] Feb 18 01:30:04 crc kubenswrapper[4847]: I0218 01:30:04.592647 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522925-48n2m"] Feb 18 01:30:05 crc kubenswrapper[4847]: I0218 01:30:05.423587 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2011530e-7707-49e4-b5a7-f7867a3b57bb" path="/var/lib/kubelet/pods/2011530e-7707-49e4-b5a7-f7867a3b57bb/volumes" Feb 18 01:30:08 crc kubenswrapper[4847]: E0218 01:30:08.407213 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:30:14 crc kubenswrapper[4847]: E0218 01:30:14.407994 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:30:21 crc kubenswrapper[4847]: E0218 01:30:21.420728 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:30:27 crc kubenswrapper[4847]: E0218 
01:30:27.416645 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:30:32 crc kubenswrapper[4847]: E0218 01:30:32.405757 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:30:35 crc kubenswrapper[4847]: I0218 01:30:35.603434 4847 scope.go:117] "RemoveContainer" containerID="935d10759f617c3c16be97d67c5f8be33850d2b8a7ef948ed5ad66e297006405" Feb 18 01:30:39 crc kubenswrapper[4847]: E0218 01:30:39.406549 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:30:47 crc kubenswrapper[4847]: E0218 01:30:47.415725 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:30:49 crc kubenswrapper[4847]: I0218 01:30:49.262642 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qnqkz"] Feb 18 01:30:49 crc kubenswrapper[4847]: E0218 01:30:49.263713 4847 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="cc9e0a9d-caf4-4a54-a69c-dd78ae948c36" containerName="collect-profiles" Feb 18 01:30:49 crc kubenswrapper[4847]: I0218 01:30:49.263739 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc9e0a9d-caf4-4a54-a69c-dd78ae948c36" containerName="collect-profiles" Feb 18 01:30:49 crc kubenswrapper[4847]: I0218 01:30:49.264154 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc9e0a9d-caf4-4a54-a69c-dd78ae948c36" containerName="collect-profiles" Feb 18 01:30:49 crc kubenswrapper[4847]: I0218 01:30:49.266704 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qnqkz" Feb 18 01:30:49 crc kubenswrapper[4847]: I0218 01:30:49.277029 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qnqkz"] Feb 18 01:30:49 crc kubenswrapper[4847]: I0218 01:30:49.317836 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmtjx\" (UniqueName: \"kubernetes.io/projected/0067e007-fdb9-4894-a8aa-ac37778a9f70-kube-api-access-jmtjx\") pod \"redhat-operators-qnqkz\" (UID: \"0067e007-fdb9-4894-a8aa-ac37778a9f70\") " pod="openshift-marketplace/redhat-operators-qnqkz" Feb 18 01:30:49 crc kubenswrapper[4847]: I0218 01:30:49.318106 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0067e007-fdb9-4894-a8aa-ac37778a9f70-catalog-content\") pod \"redhat-operators-qnqkz\" (UID: \"0067e007-fdb9-4894-a8aa-ac37778a9f70\") " pod="openshift-marketplace/redhat-operators-qnqkz" Feb 18 01:30:49 crc kubenswrapper[4847]: I0218 01:30:49.318662 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0067e007-fdb9-4894-a8aa-ac37778a9f70-utilities\") pod 
\"redhat-operators-qnqkz\" (UID: \"0067e007-fdb9-4894-a8aa-ac37778a9f70\") " pod="openshift-marketplace/redhat-operators-qnqkz" Feb 18 01:30:49 crc kubenswrapper[4847]: I0218 01:30:49.419951 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmtjx\" (UniqueName: \"kubernetes.io/projected/0067e007-fdb9-4894-a8aa-ac37778a9f70-kube-api-access-jmtjx\") pod \"redhat-operators-qnqkz\" (UID: \"0067e007-fdb9-4894-a8aa-ac37778a9f70\") " pod="openshift-marketplace/redhat-operators-qnqkz" Feb 18 01:30:49 crc kubenswrapper[4847]: I0218 01:30:49.420060 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0067e007-fdb9-4894-a8aa-ac37778a9f70-catalog-content\") pod \"redhat-operators-qnqkz\" (UID: \"0067e007-fdb9-4894-a8aa-ac37778a9f70\") " pod="openshift-marketplace/redhat-operators-qnqkz" Feb 18 01:30:49 crc kubenswrapper[4847]: I0218 01:30:49.420108 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0067e007-fdb9-4894-a8aa-ac37778a9f70-utilities\") pod \"redhat-operators-qnqkz\" (UID: \"0067e007-fdb9-4894-a8aa-ac37778a9f70\") " pod="openshift-marketplace/redhat-operators-qnqkz" Feb 18 01:30:49 crc kubenswrapper[4847]: I0218 01:30:49.420650 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0067e007-fdb9-4894-a8aa-ac37778a9f70-catalog-content\") pod \"redhat-operators-qnqkz\" (UID: \"0067e007-fdb9-4894-a8aa-ac37778a9f70\") " pod="openshift-marketplace/redhat-operators-qnqkz" Feb 18 01:30:49 crc kubenswrapper[4847]: I0218 01:30:49.420757 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0067e007-fdb9-4894-a8aa-ac37778a9f70-utilities\") pod \"redhat-operators-qnqkz\" (UID: 
\"0067e007-fdb9-4894-a8aa-ac37778a9f70\") " pod="openshift-marketplace/redhat-operators-qnqkz" Feb 18 01:30:49 crc kubenswrapper[4847]: I0218 01:30:49.443909 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmtjx\" (UniqueName: \"kubernetes.io/projected/0067e007-fdb9-4894-a8aa-ac37778a9f70-kube-api-access-jmtjx\") pod \"redhat-operators-qnqkz\" (UID: \"0067e007-fdb9-4894-a8aa-ac37778a9f70\") " pod="openshift-marketplace/redhat-operators-qnqkz" Feb 18 01:30:49 crc kubenswrapper[4847]: I0218 01:30:49.602011 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qnqkz" Feb 18 01:30:50 crc kubenswrapper[4847]: I0218 01:30:50.098132 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qnqkz"] Feb 18 01:30:50 crc kubenswrapper[4847]: I0218 01:30:50.117137 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnqkz" event={"ID":"0067e007-fdb9-4894-a8aa-ac37778a9f70","Type":"ContainerStarted","Data":"d71ff73c77d252a8ec7c10bb6d03b8e854ad5ec16900f408aec86a9de6fc0977"} Feb 18 01:30:51 crc kubenswrapper[4847]: I0218 01:30:51.132170 4847 generic.go:334] "Generic (PLEG): container finished" podID="0067e007-fdb9-4894-a8aa-ac37778a9f70" containerID="bbb371ac98bdffd2b52bd76020d5852cc7324fda4f57b3cc7f84dd3444b3ca1f" exitCode=0 Feb 18 01:30:51 crc kubenswrapper[4847]: I0218 01:30:51.132254 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnqkz" event={"ID":"0067e007-fdb9-4894-a8aa-ac37778a9f70","Type":"ContainerDied","Data":"bbb371ac98bdffd2b52bd76020d5852cc7324fda4f57b3cc7f84dd3444b3ca1f"} Feb 18 01:30:52 crc kubenswrapper[4847]: E0218 01:30:52.407157 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:30:53 crc kubenswrapper[4847]: I0218 01:30:53.154761 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnqkz" event={"ID":"0067e007-fdb9-4894-a8aa-ac37778a9f70","Type":"ContainerStarted","Data":"fb2eeeee6450e1b452865b0e5a867a3c2640160199333c2f9d842508d2241ac6"} Feb 18 01:30:56 crc kubenswrapper[4847]: I0218 01:30:56.191760 4847 generic.go:334] "Generic (PLEG): container finished" podID="0067e007-fdb9-4894-a8aa-ac37778a9f70" containerID="fb2eeeee6450e1b452865b0e5a867a3c2640160199333c2f9d842508d2241ac6" exitCode=0 Feb 18 01:30:56 crc kubenswrapper[4847]: I0218 01:30:56.192409 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnqkz" event={"ID":"0067e007-fdb9-4894-a8aa-ac37778a9f70","Type":"ContainerDied","Data":"fb2eeeee6450e1b452865b0e5a867a3c2640160199333c2f9d842508d2241ac6"} Feb 18 01:30:57 crc kubenswrapper[4847]: I0218 01:30:57.205219 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnqkz" event={"ID":"0067e007-fdb9-4894-a8aa-ac37778a9f70","Type":"ContainerStarted","Data":"5388c683de9a3aa8ff5543f1f3813d68fe35e9ca204c0cca827ff917f43cdc03"} Feb 18 01:30:57 crc kubenswrapper[4847]: I0218 01:30:57.244751 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qnqkz" podStartSLOduration=2.778447602 podStartE2EDuration="8.244729575s" podCreationTimestamp="2026-02-18 01:30:49 +0000 UTC" firstStartedPulling="2026-02-18 01:30:51.136840397 +0000 UTC m=+3924.514191339" lastFinishedPulling="2026-02-18 01:30:56.60312237 +0000 UTC m=+3929.980473312" observedRunningTime="2026-02-18 01:30:57.227523464 +0000 UTC m=+3930.604874436" watchObservedRunningTime="2026-02-18 01:30:57.244729575 +0000 
UTC m=+3930.622080537" Feb 18 01:30:59 crc kubenswrapper[4847]: I0218 01:30:59.605875 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qnqkz" Feb 18 01:30:59 crc kubenswrapper[4847]: I0218 01:30:59.606475 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qnqkz" Feb 18 01:31:00 crc kubenswrapper[4847]: I0218 01:31:00.682492 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qnqkz" podUID="0067e007-fdb9-4894-a8aa-ac37778a9f70" containerName="registry-server" probeResult="failure" output=< Feb 18 01:31:00 crc kubenswrapper[4847]: timeout: failed to connect service ":50051" within 1s Feb 18 01:31:00 crc kubenswrapper[4847]: > Feb 18 01:31:01 crc kubenswrapper[4847]: E0218 01:31:01.409012 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:31:06 crc kubenswrapper[4847]: E0218 01:31:06.409218 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:31:09 crc kubenswrapper[4847]: I0218 01:31:09.672645 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qnqkz" Feb 18 01:31:09 crc kubenswrapper[4847]: I0218 01:31:09.729586 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qnqkz" Feb 18 
01:31:09 crc kubenswrapper[4847]: I0218 01:31:09.926858 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qnqkz"] Feb 18 01:31:11 crc kubenswrapper[4847]: I0218 01:31:11.381326 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qnqkz" podUID="0067e007-fdb9-4894-a8aa-ac37778a9f70" containerName="registry-server" containerID="cri-o://5388c683de9a3aa8ff5543f1f3813d68fe35e9ca204c0cca827ff917f43cdc03" gracePeriod=2 Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.012466 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qnqkz" Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.172101 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmtjx\" (UniqueName: \"kubernetes.io/projected/0067e007-fdb9-4894-a8aa-ac37778a9f70-kube-api-access-jmtjx\") pod \"0067e007-fdb9-4894-a8aa-ac37778a9f70\" (UID: \"0067e007-fdb9-4894-a8aa-ac37778a9f70\") " Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.172335 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0067e007-fdb9-4894-a8aa-ac37778a9f70-catalog-content\") pod \"0067e007-fdb9-4894-a8aa-ac37778a9f70\" (UID: \"0067e007-fdb9-4894-a8aa-ac37778a9f70\") " Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.172502 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0067e007-fdb9-4894-a8aa-ac37778a9f70-utilities\") pod \"0067e007-fdb9-4894-a8aa-ac37778a9f70\" (UID: \"0067e007-fdb9-4894-a8aa-ac37778a9f70\") " Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.173680 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/0067e007-fdb9-4894-a8aa-ac37778a9f70-utilities" (OuterVolumeSpecName: "utilities") pod "0067e007-fdb9-4894-a8aa-ac37778a9f70" (UID: "0067e007-fdb9-4894-a8aa-ac37778a9f70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.182974 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0067e007-fdb9-4894-a8aa-ac37778a9f70-kube-api-access-jmtjx" (OuterVolumeSpecName: "kube-api-access-jmtjx") pod "0067e007-fdb9-4894-a8aa-ac37778a9f70" (UID: "0067e007-fdb9-4894-a8aa-ac37778a9f70"). InnerVolumeSpecName "kube-api-access-jmtjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.275491 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0067e007-fdb9-4894-a8aa-ac37778a9f70-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.275824 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmtjx\" (UniqueName: \"kubernetes.io/projected/0067e007-fdb9-4894-a8aa-ac37778a9f70-kube-api-access-jmtjx\") on node \"crc\" DevicePath \"\"" Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.307243 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0067e007-fdb9-4894-a8aa-ac37778a9f70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0067e007-fdb9-4894-a8aa-ac37778a9f70" (UID: "0067e007-fdb9-4894-a8aa-ac37778a9f70"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.377265 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0067e007-fdb9-4894-a8aa-ac37778a9f70-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.398777 4847 generic.go:334] "Generic (PLEG): container finished" podID="0067e007-fdb9-4894-a8aa-ac37778a9f70" containerID="5388c683de9a3aa8ff5543f1f3813d68fe35e9ca204c0cca827ff917f43cdc03" exitCode=0 Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.398836 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnqkz" event={"ID":"0067e007-fdb9-4894-a8aa-ac37778a9f70","Type":"ContainerDied","Data":"5388c683de9a3aa8ff5543f1f3813d68fe35e9ca204c0cca827ff917f43cdc03"} Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.398867 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnqkz" event={"ID":"0067e007-fdb9-4894-a8aa-ac37778a9f70","Type":"ContainerDied","Data":"d71ff73c77d252a8ec7c10bb6d03b8e854ad5ec16900f408aec86a9de6fc0977"} Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.398834 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qnqkz" Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.398934 4847 scope.go:117] "RemoveContainer" containerID="5388c683de9a3aa8ff5543f1f3813d68fe35e9ca204c0cca827ff917f43cdc03" Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.444730 4847 scope.go:117] "RemoveContainer" containerID="fb2eeeee6450e1b452865b0e5a867a3c2640160199333c2f9d842508d2241ac6" Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.453195 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qnqkz"] Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.463390 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qnqkz"] Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.486730 4847 scope.go:117] "RemoveContainer" containerID="bbb371ac98bdffd2b52bd76020d5852cc7324fda4f57b3cc7f84dd3444b3ca1f" Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.543840 4847 scope.go:117] "RemoveContainer" containerID="5388c683de9a3aa8ff5543f1f3813d68fe35e9ca204c0cca827ff917f43cdc03" Feb 18 01:31:12 crc kubenswrapper[4847]: E0218 01:31:12.544280 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5388c683de9a3aa8ff5543f1f3813d68fe35e9ca204c0cca827ff917f43cdc03\": container with ID starting with 5388c683de9a3aa8ff5543f1f3813d68fe35e9ca204c0cca827ff917f43cdc03 not found: ID does not exist" containerID="5388c683de9a3aa8ff5543f1f3813d68fe35e9ca204c0cca827ff917f43cdc03" Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.544316 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5388c683de9a3aa8ff5543f1f3813d68fe35e9ca204c0cca827ff917f43cdc03"} err="failed to get container status \"5388c683de9a3aa8ff5543f1f3813d68fe35e9ca204c0cca827ff917f43cdc03\": rpc error: code = NotFound desc = could not find container 
\"5388c683de9a3aa8ff5543f1f3813d68fe35e9ca204c0cca827ff917f43cdc03\": container with ID starting with 5388c683de9a3aa8ff5543f1f3813d68fe35e9ca204c0cca827ff917f43cdc03 not found: ID does not exist" Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.544336 4847 scope.go:117] "RemoveContainer" containerID="fb2eeeee6450e1b452865b0e5a867a3c2640160199333c2f9d842508d2241ac6" Feb 18 01:31:12 crc kubenswrapper[4847]: E0218 01:31:12.544661 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb2eeeee6450e1b452865b0e5a867a3c2640160199333c2f9d842508d2241ac6\": container with ID starting with fb2eeeee6450e1b452865b0e5a867a3c2640160199333c2f9d842508d2241ac6 not found: ID does not exist" containerID="fb2eeeee6450e1b452865b0e5a867a3c2640160199333c2f9d842508d2241ac6" Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.544685 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb2eeeee6450e1b452865b0e5a867a3c2640160199333c2f9d842508d2241ac6"} err="failed to get container status \"fb2eeeee6450e1b452865b0e5a867a3c2640160199333c2f9d842508d2241ac6\": rpc error: code = NotFound desc = could not find container \"fb2eeeee6450e1b452865b0e5a867a3c2640160199333c2f9d842508d2241ac6\": container with ID starting with fb2eeeee6450e1b452865b0e5a867a3c2640160199333c2f9d842508d2241ac6 not found: ID does not exist" Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.544704 4847 scope.go:117] "RemoveContainer" containerID="bbb371ac98bdffd2b52bd76020d5852cc7324fda4f57b3cc7f84dd3444b3ca1f" Feb 18 01:31:12 crc kubenswrapper[4847]: E0218 01:31:12.544922 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbb371ac98bdffd2b52bd76020d5852cc7324fda4f57b3cc7f84dd3444b3ca1f\": container with ID starting with bbb371ac98bdffd2b52bd76020d5852cc7324fda4f57b3cc7f84dd3444b3ca1f not found: ID does not exist" 
containerID="bbb371ac98bdffd2b52bd76020d5852cc7324fda4f57b3cc7f84dd3444b3ca1f" Feb 18 01:31:12 crc kubenswrapper[4847]: I0218 01:31:12.544942 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbb371ac98bdffd2b52bd76020d5852cc7324fda4f57b3cc7f84dd3444b3ca1f"} err="failed to get container status \"bbb371ac98bdffd2b52bd76020d5852cc7324fda4f57b3cc7f84dd3444b3ca1f\": rpc error: code = NotFound desc = could not find container \"bbb371ac98bdffd2b52bd76020d5852cc7324fda4f57b3cc7f84dd3444b3ca1f\": container with ID starting with bbb371ac98bdffd2b52bd76020d5852cc7324fda4f57b3cc7f84dd3444b3ca1f not found: ID does not exist" Feb 18 01:31:13 crc kubenswrapper[4847]: E0218 01:31:13.408861 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:31:13 crc kubenswrapper[4847]: I0218 01:31:13.426735 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0067e007-fdb9-4894-a8aa-ac37778a9f70" path="/var/lib/kubelet/pods/0067e007-fdb9-4894-a8aa-ac37778a9f70/volumes" Feb 18 01:31:19 crc kubenswrapper[4847]: E0218 01:31:19.408378 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:31:25 crc kubenswrapper[4847]: E0218 01:31:25.409001 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:31:33 crc kubenswrapper[4847]: E0218 01:31:33.407452 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:31:35 crc kubenswrapper[4847]: I0218 01:31:35.228182 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5mx2m"] Feb 18 01:31:35 crc kubenswrapper[4847]: E0218 01:31:35.229112 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0067e007-fdb9-4894-a8aa-ac37778a9f70" containerName="registry-server" Feb 18 01:31:35 crc kubenswrapper[4847]: I0218 01:31:35.229129 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="0067e007-fdb9-4894-a8aa-ac37778a9f70" containerName="registry-server" Feb 18 01:31:35 crc kubenswrapper[4847]: E0218 01:31:35.229147 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0067e007-fdb9-4894-a8aa-ac37778a9f70" containerName="extract-utilities" Feb 18 01:31:35 crc kubenswrapper[4847]: I0218 01:31:35.229155 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="0067e007-fdb9-4894-a8aa-ac37778a9f70" containerName="extract-utilities" Feb 18 01:31:35 crc kubenswrapper[4847]: E0218 01:31:35.229164 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0067e007-fdb9-4894-a8aa-ac37778a9f70" containerName="extract-content" Feb 18 01:31:35 crc kubenswrapper[4847]: I0218 01:31:35.229173 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="0067e007-fdb9-4894-a8aa-ac37778a9f70" containerName="extract-content" Feb 18 01:31:35 crc kubenswrapper[4847]: I0218 
01:31:35.229431 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="0067e007-fdb9-4894-a8aa-ac37778a9f70" containerName="registry-server" Feb 18 01:31:35 crc kubenswrapper[4847]: I0218 01:31:35.231205 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5mx2m" Feb 18 01:31:35 crc kubenswrapper[4847]: I0218 01:31:35.248299 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5mx2m"] Feb 18 01:31:35 crc kubenswrapper[4847]: I0218 01:31:35.354129 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/556f61db-04d0-4258-83ba-cce7f7855b7d-catalog-content\") pod \"redhat-marketplace-5mx2m\" (UID: \"556f61db-04d0-4258-83ba-cce7f7855b7d\") " pod="openshift-marketplace/redhat-marketplace-5mx2m" Feb 18 01:31:35 crc kubenswrapper[4847]: I0218 01:31:35.354209 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/556f61db-04d0-4258-83ba-cce7f7855b7d-utilities\") pod \"redhat-marketplace-5mx2m\" (UID: \"556f61db-04d0-4258-83ba-cce7f7855b7d\") " pod="openshift-marketplace/redhat-marketplace-5mx2m" Feb 18 01:31:35 crc kubenswrapper[4847]: I0218 01:31:35.354799 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw2kr\" (UniqueName: \"kubernetes.io/projected/556f61db-04d0-4258-83ba-cce7f7855b7d-kube-api-access-bw2kr\") pod \"redhat-marketplace-5mx2m\" (UID: \"556f61db-04d0-4258-83ba-cce7f7855b7d\") " pod="openshift-marketplace/redhat-marketplace-5mx2m" Feb 18 01:31:35 crc kubenswrapper[4847]: I0218 01:31:35.456994 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw2kr\" (UniqueName: 
\"kubernetes.io/projected/556f61db-04d0-4258-83ba-cce7f7855b7d-kube-api-access-bw2kr\") pod \"redhat-marketplace-5mx2m\" (UID: \"556f61db-04d0-4258-83ba-cce7f7855b7d\") " pod="openshift-marketplace/redhat-marketplace-5mx2m" Feb 18 01:31:35 crc kubenswrapper[4847]: I0218 01:31:35.457113 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/556f61db-04d0-4258-83ba-cce7f7855b7d-catalog-content\") pod \"redhat-marketplace-5mx2m\" (UID: \"556f61db-04d0-4258-83ba-cce7f7855b7d\") " pod="openshift-marketplace/redhat-marketplace-5mx2m" Feb 18 01:31:35 crc kubenswrapper[4847]: I0218 01:31:35.457144 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/556f61db-04d0-4258-83ba-cce7f7855b7d-utilities\") pod \"redhat-marketplace-5mx2m\" (UID: \"556f61db-04d0-4258-83ba-cce7f7855b7d\") " pod="openshift-marketplace/redhat-marketplace-5mx2m" Feb 18 01:31:35 crc kubenswrapper[4847]: I0218 01:31:35.457707 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/556f61db-04d0-4258-83ba-cce7f7855b7d-utilities\") pod \"redhat-marketplace-5mx2m\" (UID: \"556f61db-04d0-4258-83ba-cce7f7855b7d\") " pod="openshift-marketplace/redhat-marketplace-5mx2m" Feb 18 01:31:35 crc kubenswrapper[4847]: I0218 01:31:35.457756 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/556f61db-04d0-4258-83ba-cce7f7855b7d-catalog-content\") pod \"redhat-marketplace-5mx2m\" (UID: \"556f61db-04d0-4258-83ba-cce7f7855b7d\") " pod="openshift-marketplace/redhat-marketplace-5mx2m" Feb 18 01:31:35 crc kubenswrapper[4847]: I0218 01:31:35.489972 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw2kr\" (UniqueName: 
\"kubernetes.io/projected/556f61db-04d0-4258-83ba-cce7f7855b7d-kube-api-access-bw2kr\") pod \"redhat-marketplace-5mx2m\" (UID: \"556f61db-04d0-4258-83ba-cce7f7855b7d\") " pod="openshift-marketplace/redhat-marketplace-5mx2m" Feb 18 01:31:35 crc kubenswrapper[4847]: I0218 01:31:35.559524 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5mx2m" Feb 18 01:31:36 crc kubenswrapper[4847]: I0218 01:31:36.097907 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5mx2m"] Feb 18 01:31:36 crc kubenswrapper[4847]: I0218 01:31:36.703723 4847 generic.go:334] "Generic (PLEG): container finished" podID="556f61db-04d0-4258-83ba-cce7f7855b7d" containerID="6ce4fde1ea629a92e5e8cf93c5329f42555c56937a8aa7d988ccc7920dec7540" exitCode=0 Feb 18 01:31:36 crc kubenswrapper[4847]: I0218 01:31:36.703975 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5mx2m" event={"ID":"556f61db-04d0-4258-83ba-cce7f7855b7d","Type":"ContainerDied","Data":"6ce4fde1ea629a92e5e8cf93c5329f42555c56937a8aa7d988ccc7920dec7540"} Feb 18 01:31:36 crc kubenswrapper[4847]: I0218 01:31:36.704002 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5mx2m" event={"ID":"556f61db-04d0-4258-83ba-cce7f7855b7d","Type":"ContainerStarted","Data":"2e02955e901c548dd51940dba9c4fd2df9dced431b654bf7742335c0a1025a48"} Feb 18 01:31:37 crc kubenswrapper[4847]: I0218 01:31:37.724380 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5mx2m" event={"ID":"556f61db-04d0-4258-83ba-cce7f7855b7d","Type":"ContainerStarted","Data":"7b3825cc9ad54dcc20cc7ca813375924779bd87fa3a5bf495254145e2b06dde4"} Feb 18 01:31:38 crc kubenswrapper[4847]: I0218 01:31:38.741566 4847 generic.go:334] "Generic (PLEG): container finished" podID="556f61db-04d0-4258-83ba-cce7f7855b7d" 
containerID="7b3825cc9ad54dcc20cc7ca813375924779bd87fa3a5bf495254145e2b06dde4" exitCode=0 Feb 18 01:31:38 crc kubenswrapper[4847]: I0218 01:31:38.742190 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5mx2m" event={"ID":"556f61db-04d0-4258-83ba-cce7f7855b7d","Type":"ContainerDied","Data":"7b3825cc9ad54dcc20cc7ca813375924779bd87fa3a5bf495254145e2b06dde4"} Feb 18 01:31:39 crc kubenswrapper[4847]: E0218 01:31:39.417835 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:31:39 crc kubenswrapper[4847]: I0218 01:31:39.755828 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5mx2m" event={"ID":"556f61db-04d0-4258-83ba-cce7f7855b7d","Type":"ContainerStarted","Data":"1bb471277b2858582428cec9feaff5de8391e8902c3a9a0331bc2bbf95267e96"} Feb 18 01:31:39 crc kubenswrapper[4847]: I0218 01:31:39.784350 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5mx2m" podStartSLOduration=2.326928875 podStartE2EDuration="4.784326737s" podCreationTimestamp="2026-02-18 01:31:35 +0000 UTC" firstStartedPulling="2026-02-18 01:31:36.722631623 +0000 UTC m=+3970.099982585" lastFinishedPulling="2026-02-18 01:31:39.180029475 +0000 UTC m=+3972.557380447" observedRunningTime="2026-02-18 01:31:39.782861022 +0000 UTC m=+3973.160211964" watchObservedRunningTime="2026-02-18 01:31:39.784326737 +0000 UTC m=+3973.161677709" Feb 18 01:31:45 crc kubenswrapper[4847]: I0218 01:31:45.560693 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5mx2m" Feb 18 01:31:45 crc kubenswrapper[4847]: I0218 
01:31:45.561507 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5mx2m" Feb 18 01:31:45 crc kubenswrapper[4847]: I0218 01:31:45.654712 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5mx2m" Feb 18 01:31:45 crc kubenswrapper[4847]: I0218 01:31:45.882029 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5mx2m" Feb 18 01:31:45 crc kubenswrapper[4847]: I0218 01:31:45.976963 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5mx2m"] Feb 18 01:31:46 crc kubenswrapper[4847]: E0218 01:31:46.409089 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:31:47 crc kubenswrapper[4847]: I0218 01:31:47.855588 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5mx2m" podUID="556f61db-04d0-4258-83ba-cce7f7855b7d" containerName="registry-server" containerID="cri-o://1bb471277b2858582428cec9feaff5de8391e8902c3a9a0331bc2bbf95267e96" gracePeriod=2 Feb 18 01:31:48 crc kubenswrapper[4847]: I0218 01:31:48.382357 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5mx2m" Feb 18 01:31:48 crc kubenswrapper[4847]: I0218 01:31:48.503127 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw2kr\" (UniqueName: \"kubernetes.io/projected/556f61db-04d0-4258-83ba-cce7f7855b7d-kube-api-access-bw2kr\") pod \"556f61db-04d0-4258-83ba-cce7f7855b7d\" (UID: \"556f61db-04d0-4258-83ba-cce7f7855b7d\") " Feb 18 01:31:48 crc kubenswrapper[4847]: I0218 01:31:48.503224 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/556f61db-04d0-4258-83ba-cce7f7855b7d-utilities\") pod \"556f61db-04d0-4258-83ba-cce7f7855b7d\" (UID: \"556f61db-04d0-4258-83ba-cce7f7855b7d\") " Feb 18 01:31:48 crc kubenswrapper[4847]: I0218 01:31:48.504919 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/556f61db-04d0-4258-83ba-cce7f7855b7d-catalog-content\") pod \"556f61db-04d0-4258-83ba-cce7f7855b7d\" (UID: \"556f61db-04d0-4258-83ba-cce7f7855b7d\") " Feb 18 01:31:48 crc kubenswrapper[4847]: I0218 01:31:48.505908 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/556f61db-04d0-4258-83ba-cce7f7855b7d-utilities" (OuterVolumeSpecName: "utilities") pod "556f61db-04d0-4258-83ba-cce7f7855b7d" (UID: "556f61db-04d0-4258-83ba-cce7f7855b7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:31:48 crc kubenswrapper[4847]: I0218 01:31:48.506517 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/556f61db-04d0-4258-83ba-cce7f7855b7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:31:48 crc kubenswrapper[4847]: I0218 01:31:48.514088 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/556f61db-04d0-4258-83ba-cce7f7855b7d-kube-api-access-bw2kr" (OuterVolumeSpecName: "kube-api-access-bw2kr") pod "556f61db-04d0-4258-83ba-cce7f7855b7d" (UID: "556f61db-04d0-4258-83ba-cce7f7855b7d"). InnerVolumeSpecName "kube-api-access-bw2kr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:31:48 crc kubenswrapper[4847]: I0218 01:31:48.554640 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/556f61db-04d0-4258-83ba-cce7f7855b7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "556f61db-04d0-4258-83ba-cce7f7855b7d" (UID: "556f61db-04d0-4258-83ba-cce7f7855b7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:31:48 crc kubenswrapper[4847]: I0218 01:31:48.608838 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw2kr\" (UniqueName: \"kubernetes.io/projected/556f61db-04d0-4258-83ba-cce7f7855b7d-kube-api-access-bw2kr\") on node \"crc\" DevicePath \"\"" Feb 18 01:31:48 crc kubenswrapper[4847]: I0218 01:31:48.608888 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/556f61db-04d0-4258-83ba-cce7f7855b7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:31:48 crc kubenswrapper[4847]: I0218 01:31:48.874283 4847 generic.go:334] "Generic (PLEG): container finished" podID="556f61db-04d0-4258-83ba-cce7f7855b7d" containerID="1bb471277b2858582428cec9feaff5de8391e8902c3a9a0331bc2bbf95267e96" exitCode=0 Feb 18 01:31:48 crc kubenswrapper[4847]: I0218 01:31:48.874424 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5mx2m" Feb 18 01:31:48 crc kubenswrapper[4847]: I0218 01:31:48.874431 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5mx2m" event={"ID":"556f61db-04d0-4258-83ba-cce7f7855b7d","Type":"ContainerDied","Data":"1bb471277b2858582428cec9feaff5de8391e8902c3a9a0331bc2bbf95267e96"} Feb 18 01:31:48 crc kubenswrapper[4847]: I0218 01:31:48.875131 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5mx2m" event={"ID":"556f61db-04d0-4258-83ba-cce7f7855b7d","Type":"ContainerDied","Data":"2e02955e901c548dd51940dba9c4fd2df9dced431b654bf7742335c0a1025a48"} Feb 18 01:31:48 crc kubenswrapper[4847]: I0218 01:31:48.875172 4847 scope.go:117] "RemoveContainer" containerID="1bb471277b2858582428cec9feaff5de8391e8902c3a9a0331bc2bbf95267e96" Feb 18 01:31:48 crc kubenswrapper[4847]: I0218 01:31:48.924850 4847 scope.go:117] "RemoveContainer" 
containerID="7b3825cc9ad54dcc20cc7ca813375924779bd87fa3a5bf495254145e2b06dde4" Feb 18 01:31:48 crc kubenswrapper[4847]: I0218 01:31:48.948195 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5mx2m"] Feb 18 01:31:48 crc kubenswrapper[4847]: I0218 01:31:48.950102 4847 scope.go:117] "RemoveContainer" containerID="6ce4fde1ea629a92e5e8cf93c5329f42555c56937a8aa7d988ccc7920dec7540" Feb 18 01:31:48 crc kubenswrapper[4847]: I0218 01:31:48.960671 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5mx2m"] Feb 18 01:31:49 crc kubenswrapper[4847]: I0218 01:31:49.025867 4847 scope.go:117] "RemoveContainer" containerID="1bb471277b2858582428cec9feaff5de8391e8902c3a9a0331bc2bbf95267e96" Feb 18 01:31:49 crc kubenswrapper[4847]: E0218 01:31:49.026274 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bb471277b2858582428cec9feaff5de8391e8902c3a9a0331bc2bbf95267e96\": container with ID starting with 1bb471277b2858582428cec9feaff5de8391e8902c3a9a0331bc2bbf95267e96 not found: ID does not exist" containerID="1bb471277b2858582428cec9feaff5de8391e8902c3a9a0331bc2bbf95267e96" Feb 18 01:31:49 crc kubenswrapper[4847]: I0218 01:31:49.026335 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bb471277b2858582428cec9feaff5de8391e8902c3a9a0331bc2bbf95267e96"} err="failed to get container status \"1bb471277b2858582428cec9feaff5de8391e8902c3a9a0331bc2bbf95267e96\": rpc error: code = NotFound desc = could not find container \"1bb471277b2858582428cec9feaff5de8391e8902c3a9a0331bc2bbf95267e96\": container with ID starting with 1bb471277b2858582428cec9feaff5de8391e8902c3a9a0331bc2bbf95267e96 not found: ID does not exist" Feb 18 01:31:49 crc kubenswrapper[4847]: I0218 01:31:49.026361 4847 scope.go:117] "RemoveContainer" 
containerID="7b3825cc9ad54dcc20cc7ca813375924779bd87fa3a5bf495254145e2b06dde4" Feb 18 01:31:49 crc kubenswrapper[4847]: E0218 01:31:49.026790 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b3825cc9ad54dcc20cc7ca813375924779bd87fa3a5bf495254145e2b06dde4\": container with ID starting with 7b3825cc9ad54dcc20cc7ca813375924779bd87fa3a5bf495254145e2b06dde4 not found: ID does not exist" containerID="7b3825cc9ad54dcc20cc7ca813375924779bd87fa3a5bf495254145e2b06dde4" Feb 18 01:31:49 crc kubenswrapper[4847]: I0218 01:31:49.026850 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b3825cc9ad54dcc20cc7ca813375924779bd87fa3a5bf495254145e2b06dde4"} err="failed to get container status \"7b3825cc9ad54dcc20cc7ca813375924779bd87fa3a5bf495254145e2b06dde4\": rpc error: code = NotFound desc = could not find container \"7b3825cc9ad54dcc20cc7ca813375924779bd87fa3a5bf495254145e2b06dde4\": container with ID starting with 7b3825cc9ad54dcc20cc7ca813375924779bd87fa3a5bf495254145e2b06dde4 not found: ID does not exist" Feb 18 01:31:49 crc kubenswrapper[4847]: I0218 01:31:49.026898 4847 scope.go:117] "RemoveContainer" containerID="6ce4fde1ea629a92e5e8cf93c5329f42555c56937a8aa7d988ccc7920dec7540" Feb 18 01:31:49 crc kubenswrapper[4847]: E0218 01:31:49.027305 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ce4fde1ea629a92e5e8cf93c5329f42555c56937a8aa7d988ccc7920dec7540\": container with ID starting with 6ce4fde1ea629a92e5e8cf93c5329f42555c56937a8aa7d988ccc7920dec7540 not found: ID does not exist" containerID="6ce4fde1ea629a92e5e8cf93c5329f42555c56937a8aa7d988ccc7920dec7540" Feb 18 01:31:49 crc kubenswrapper[4847]: I0218 01:31:49.027358 4847 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6ce4fde1ea629a92e5e8cf93c5329f42555c56937a8aa7d988ccc7920dec7540"} err="failed to get container status \"6ce4fde1ea629a92e5e8cf93c5329f42555c56937a8aa7d988ccc7920dec7540\": rpc error: code = NotFound desc = could not find container \"6ce4fde1ea629a92e5e8cf93c5329f42555c56937a8aa7d988ccc7920dec7540\": container with ID starting with 6ce4fde1ea629a92e5e8cf93c5329f42555c56937a8aa7d988ccc7920dec7540 not found: ID does not exist" Feb 18 01:31:49 crc kubenswrapper[4847]: I0218 01:31:49.417766 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="556f61db-04d0-4258-83ba-cce7f7855b7d" path="/var/lib/kubelet/pods/556f61db-04d0-4258-83ba-cce7f7855b7d/volumes" Feb 18 01:31:53 crc kubenswrapper[4847]: E0218 01:31:53.408396 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:31:58 crc kubenswrapper[4847]: E0218 01:31:58.407631 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:32:08 crc kubenswrapper[4847]: E0218 01:32:08.407860 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:32:10 crc kubenswrapper[4847]: E0218 01:32:10.408172 4847 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:32:23 crc kubenswrapper[4847]: E0218 01:32:23.409160 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:32:23 crc kubenswrapper[4847]: I0218 01:32:23.491475 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:32:23 crc kubenswrapper[4847]: I0218 01:32:23.491565 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:32:25 crc kubenswrapper[4847]: E0218 01:32:25.408202 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:32:34 crc kubenswrapper[4847]: E0218 01:32:34.407719 4847 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:32:36 crc kubenswrapper[4847]: E0218 01:32:36.405915 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:32:49 crc kubenswrapper[4847]: E0218 01:32:49.408378 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:32:50 crc kubenswrapper[4847]: E0218 01:32:50.406356 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:32:53 crc kubenswrapper[4847]: I0218 01:32:53.492234 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:32:53 crc kubenswrapper[4847]: I0218 01:32:53.492889 4847 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:33:01 crc kubenswrapper[4847]: I0218 01:33:01.406570 4847 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 01:33:01 crc kubenswrapper[4847]: E0218 01:33:01.545133 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:33:01 crc kubenswrapper[4847]: E0218 01:33:01.545429 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:33:01 crc kubenswrapper[4847]: E0218 01:33:01.545617 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjwt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-k4t5r_openstack(452f74c1-fa5f-464b-9943-a4a1c2d5c48a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:33:01 crc kubenswrapper[4847]: E0218 01:33:01.546914 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:33:02 crc kubenswrapper[4847]: E0218 01:33:02.407319 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:33:14 crc kubenswrapper[4847]: E0218 01:33:14.407469 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:33:14 crc kubenswrapper[4847]: E0218 01:33:14.407469 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:33:23 crc kubenswrapper[4847]: I0218 01:33:23.491351 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:33:23 crc kubenswrapper[4847]: I0218 01:33:23.491912 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:33:23 crc kubenswrapper[4847]: I0218 01:33:23.491959 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 01:33:23 crc kubenswrapper[4847]: I0218 01:33:23.492824 4847 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6662f99cde0f39692de20655adaecf6fb0a06da58dc0042967bafe9377519292"} pod="openshift-machine-config-operator/machine-config-daemon-xsj47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 01:33:23 crc kubenswrapper[4847]: I0218 01:33:23.492891 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" containerID="cri-o://6662f99cde0f39692de20655adaecf6fb0a06da58dc0042967bafe9377519292" gracePeriod=600 Feb 18 01:33:24 crc kubenswrapper[4847]: I0218 01:33:24.182911 4847 generic.go:334] "Generic (PLEG): container finished" podID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerID="6662f99cde0f39692de20655adaecf6fb0a06da58dc0042967bafe9377519292" exitCode=0 Feb 18 01:33:24 crc kubenswrapper[4847]: I0218 01:33:24.183006 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerDied","Data":"6662f99cde0f39692de20655adaecf6fb0a06da58dc0042967bafe9377519292"} Feb 18 01:33:24 crc kubenswrapper[4847]: I0218 01:33:24.183172 4847 scope.go:117] "RemoveContainer" containerID="4ad2dd7b6676d2d8e9c66253c6d279107493631ac8634a33de278680d3156b8c" Feb 18 01:33:25 crc kubenswrapper[4847]: I0218 01:33:25.198507 4847 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerStarted","Data":"ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271"} Feb 18 01:33:26 crc kubenswrapper[4847]: E0218 01:33:26.407478 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:33:28 crc kubenswrapper[4847]: E0218 01:33:28.406234 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:33:40 crc kubenswrapper[4847]: E0218 01:33:40.537879 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:33:40 crc kubenswrapper[4847]: E0218 01:33:40.538410 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:33:40 crc kubenswrapper[4847]: E0218 01:33:40.538558 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9fhbdh67dh64h568hch65dhb8h67chf7h66dhfh585h565h85h554h68ch8bh56ch677h57bhb5h544h575hd7h5f8hc7h66dh669h57bh589h56cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-
ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 18 01:33:40 crc kubenswrapper[4847]: E0218 01:33:40.539882 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:33:42 crc kubenswrapper[4847]: E0218 01:33:42.405420 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:33:54 crc kubenswrapper[4847]: E0218 01:33:54.406704 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:33:57 crc kubenswrapper[4847]: E0218 01:33:57.415340 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:34:08 crc kubenswrapper[4847]: E0218 01:34:08.409000 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:34:09 crc kubenswrapper[4847]: E0218 01:34:09.406482 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:34:20 crc kubenswrapper[4847]: E0218 01:34:20.408057 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:34:24 crc kubenswrapper[4847]: E0218 01:34:24.408236 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:34:34 crc kubenswrapper[4847]: E0218 01:34:34.406721 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:34:38 crc kubenswrapper[4847]: E0218 01:34:38.407072 4847 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:34:45 crc kubenswrapper[4847]: E0218 01:34:45.410533 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:34:49 crc kubenswrapper[4847]: E0218 01:34:49.408212 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:35:00 crc kubenswrapper[4847]: E0218 01:35:00.406414 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:35:01 crc kubenswrapper[4847]: E0218 01:35:01.407398 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:35:12 crc kubenswrapper[4847]: E0218 01:35:12.406581 4847 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:35:15 crc kubenswrapper[4847]: E0218 01:35:15.407636 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:35:25 crc kubenswrapper[4847]: E0218 01:35:25.407567 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:35:29 crc kubenswrapper[4847]: E0218 01:35:29.409527 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:35:37 crc kubenswrapper[4847]: E0218 01:35:37.421740 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:35:43 crc kubenswrapper[4847]: E0218 01:35:43.407963 4847 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:35:50 crc kubenswrapper[4847]: E0218 01:35:50.409514 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:35:53 crc kubenswrapper[4847]: I0218 01:35:53.492135 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:35:53 crc kubenswrapper[4847]: I0218 01:35:53.492565 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:35:54 crc kubenswrapper[4847]: E0218 01:35:54.407411 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:36:05 crc kubenswrapper[4847]: E0218 01:36:05.409140 4847 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:36:07 crc kubenswrapper[4847]: E0218 01:36:07.431865 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:36:19 crc kubenswrapper[4847]: E0218 01:36:19.408952 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:36:20 crc kubenswrapper[4847]: E0218 01:36:20.407278 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:36:23 crc kubenswrapper[4847]: I0218 01:36:23.491796 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:36:23 crc kubenswrapper[4847]: I0218 01:36:23.492370 4847 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:36:30 crc kubenswrapper[4847]: E0218 01:36:30.407301 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:36:32 crc kubenswrapper[4847]: E0218 01:36:32.405999 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:36:43 crc kubenswrapper[4847]: E0218 01:36:43.407345 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:36:47 crc kubenswrapper[4847]: E0218 01:36:47.415289 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.370213 4847 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-b5b2z"] Feb 18 01:36:52 crc kubenswrapper[4847]: E0218 01:36:52.371120 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556f61db-04d0-4258-83ba-cce7f7855b7d" containerName="extract-content" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.371131 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="556f61db-04d0-4258-83ba-cce7f7855b7d" containerName="extract-content" Feb 18 01:36:52 crc kubenswrapper[4847]: E0218 01:36:52.371169 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556f61db-04d0-4258-83ba-cce7f7855b7d" containerName="registry-server" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.371175 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="556f61db-04d0-4258-83ba-cce7f7855b7d" containerName="registry-server" Feb 18 01:36:52 crc kubenswrapper[4847]: E0218 01:36:52.371194 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="556f61db-04d0-4258-83ba-cce7f7855b7d" containerName="extract-utilities" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.371201 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="556f61db-04d0-4258-83ba-cce7f7855b7d" containerName="extract-utilities" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.371370 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="556f61db-04d0-4258-83ba-cce7f7855b7d" containerName="registry-server" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.373993 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b5b2z" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.408976 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b5b2z"] Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.467371 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-898db"] Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.469832 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-898db" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.486020 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-898db"] Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.492903 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nztfz\" (UniqueName: \"kubernetes.io/projected/eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c-kube-api-access-nztfz\") pod \"community-operators-b5b2z\" (UID: \"eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c\") " pod="openshift-marketplace/community-operators-b5b2z" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.492974 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c-catalog-content\") pod \"community-operators-b5b2z\" (UID: \"eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c\") " pod="openshift-marketplace/community-operators-b5b2z" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.492999 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c-utilities\") pod \"community-operators-b5b2z\" (UID: \"eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c\") " 
pod="openshift-marketplace/community-operators-b5b2z" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.594971 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nztfz\" (UniqueName: \"kubernetes.io/projected/eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c-kube-api-access-nztfz\") pod \"community-operators-b5b2z\" (UID: \"eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c\") " pod="openshift-marketplace/community-operators-b5b2z" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.595071 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c-catalog-content\") pod \"community-operators-b5b2z\" (UID: \"eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c\") " pod="openshift-marketplace/community-operators-b5b2z" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.595103 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c-utilities\") pod \"community-operators-b5b2z\" (UID: \"eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c\") " pod="openshift-marketplace/community-operators-b5b2z" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.595168 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f09d8132-0500-4681-a7b8-15c4b446ed34-catalog-content\") pod \"certified-operators-898db\" (UID: \"f09d8132-0500-4681-a7b8-15c4b446ed34\") " pod="openshift-marketplace/certified-operators-898db" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.595275 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vrff\" (UniqueName: \"kubernetes.io/projected/f09d8132-0500-4681-a7b8-15c4b446ed34-kube-api-access-2vrff\") pod \"certified-operators-898db\" (UID: 
\"f09d8132-0500-4681-a7b8-15c4b446ed34\") " pod="openshift-marketplace/certified-operators-898db" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.595349 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f09d8132-0500-4681-a7b8-15c4b446ed34-utilities\") pod \"certified-operators-898db\" (UID: \"f09d8132-0500-4681-a7b8-15c4b446ed34\") " pod="openshift-marketplace/certified-operators-898db" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.595953 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c-utilities\") pod \"community-operators-b5b2z\" (UID: \"eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c\") " pod="openshift-marketplace/community-operators-b5b2z" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.596092 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c-catalog-content\") pod \"community-operators-b5b2z\" (UID: \"eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c\") " pod="openshift-marketplace/community-operators-b5b2z" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.623907 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nztfz\" (UniqueName: \"kubernetes.io/projected/eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c-kube-api-access-nztfz\") pod \"community-operators-b5b2z\" (UID: \"eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c\") " pod="openshift-marketplace/community-operators-b5b2z" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.697482 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vrff\" (UniqueName: \"kubernetes.io/projected/f09d8132-0500-4681-a7b8-15c4b446ed34-kube-api-access-2vrff\") pod \"certified-operators-898db\" (UID: 
\"f09d8132-0500-4681-a7b8-15c4b446ed34\") " pod="openshift-marketplace/certified-operators-898db" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.697862 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f09d8132-0500-4681-a7b8-15c4b446ed34-utilities\") pod \"certified-operators-898db\" (UID: \"f09d8132-0500-4681-a7b8-15c4b446ed34\") " pod="openshift-marketplace/certified-operators-898db" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.698033 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f09d8132-0500-4681-a7b8-15c4b446ed34-catalog-content\") pod \"certified-operators-898db\" (UID: \"f09d8132-0500-4681-a7b8-15c4b446ed34\") " pod="openshift-marketplace/certified-operators-898db" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.698253 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f09d8132-0500-4681-a7b8-15c4b446ed34-utilities\") pod \"certified-operators-898db\" (UID: \"f09d8132-0500-4681-a7b8-15c4b446ed34\") " pod="openshift-marketplace/certified-operators-898db" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.698412 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f09d8132-0500-4681-a7b8-15c4b446ed34-catalog-content\") pod \"certified-operators-898db\" (UID: \"f09d8132-0500-4681-a7b8-15c4b446ed34\") " pod="openshift-marketplace/certified-operators-898db" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.712910 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b5b2z" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.725687 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vrff\" (UniqueName: \"kubernetes.io/projected/f09d8132-0500-4681-a7b8-15c4b446ed34-kube-api-access-2vrff\") pod \"certified-operators-898db\" (UID: \"f09d8132-0500-4681-a7b8-15c4b446ed34\") " pod="openshift-marketplace/certified-operators-898db" Feb 18 01:36:52 crc kubenswrapper[4847]: I0218 01:36:52.789297 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-898db" Feb 18 01:36:53 crc kubenswrapper[4847]: I0218 01:36:53.353784 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b5b2z"] Feb 18 01:36:53 crc kubenswrapper[4847]: I0218 01:36:53.491891 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:36:53 crc kubenswrapper[4847]: I0218 01:36:53.491975 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:36:53 crc kubenswrapper[4847]: I0218 01:36:53.492025 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 01:36:53 crc kubenswrapper[4847]: I0218 01:36:53.492853 4847 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271"} pod="openshift-machine-config-operator/machine-config-daemon-xsj47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 01:36:53 crc kubenswrapper[4847]: I0218 01:36:53.492904 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" containerID="cri-o://ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" gracePeriod=600 Feb 18 01:36:53 crc kubenswrapper[4847]: I0218 01:36:53.496081 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-898db"] Feb 18 01:36:53 crc kubenswrapper[4847]: W0218 01:36:53.586594 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaa1c140_eb5a_43e7_a716_d3a3f7b2b89c.slice/crio-3cda605c6659d345b55f723d8d84e01ba1cb1deb71b8dd57ee19ce8b5421ae9e WatchSource:0}: Error finding container 3cda605c6659d345b55f723d8d84e01ba1cb1deb71b8dd57ee19ce8b5421ae9e: Status 404 returned error can't find the container with id 3cda605c6659d345b55f723d8d84e01ba1cb1deb71b8dd57ee19ce8b5421ae9e Feb 18 01:36:53 crc kubenswrapper[4847]: W0218 01:36:53.592308 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf09d8132_0500_4681_a7b8_15c4b446ed34.slice/crio-784e3a56bd3ded8bbe23f8af0573c6c0deee835b46dc3b31880e91be47fdebec WatchSource:0}: Error finding container 784e3a56bd3ded8bbe23f8af0573c6c0deee835b46dc3b31880e91be47fdebec: Status 404 returned error can't find the container with id 784e3a56bd3ded8bbe23f8af0573c6c0deee835b46dc3b31880e91be47fdebec Feb 18 01:36:53 crc kubenswrapper[4847]: E0218 01:36:53.640979 4847 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:36:54 crc kubenswrapper[4847]: I0218 01:36:54.366686 4847 generic.go:334] "Generic (PLEG): container finished" podID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" exitCode=0 Feb 18 01:36:54 crc kubenswrapper[4847]: I0218 01:36:54.366788 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerDied","Data":"ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271"} Feb 18 01:36:54 crc kubenswrapper[4847]: I0218 01:36:54.367171 4847 scope.go:117] "RemoveContainer" containerID="6662f99cde0f39692de20655adaecf6fb0a06da58dc0042967bafe9377519292" Feb 18 01:36:54 crc kubenswrapper[4847]: I0218 01:36:54.368480 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:36:54 crc kubenswrapper[4847]: E0218 01:36:54.369337 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:36:54 crc kubenswrapper[4847]: I0218 01:36:54.370511 4847 generic.go:334] "Generic (PLEG): container finished" 
podID="eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c" containerID="0a771c2e0ad421fa01ed6b70ea1341b320dcbe85955f3a8b9a7ff6e9c0e2b2eb" exitCode=0 Feb 18 01:36:54 crc kubenswrapper[4847]: I0218 01:36:54.370649 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5b2z" event={"ID":"eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c","Type":"ContainerDied","Data":"0a771c2e0ad421fa01ed6b70ea1341b320dcbe85955f3a8b9a7ff6e9c0e2b2eb"} Feb 18 01:36:54 crc kubenswrapper[4847]: I0218 01:36:54.370729 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5b2z" event={"ID":"eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c","Type":"ContainerStarted","Data":"3cda605c6659d345b55f723d8d84e01ba1cb1deb71b8dd57ee19ce8b5421ae9e"} Feb 18 01:36:54 crc kubenswrapper[4847]: I0218 01:36:54.374242 4847 generic.go:334] "Generic (PLEG): container finished" podID="f09d8132-0500-4681-a7b8-15c4b446ed34" containerID="c9731c8d6197e63667f75876318fb3879e25933e3c0ea36c54b9789c3b3d19a5" exitCode=0 Feb 18 01:36:54 crc kubenswrapper[4847]: I0218 01:36:54.374289 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-898db" event={"ID":"f09d8132-0500-4681-a7b8-15c4b446ed34","Type":"ContainerDied","Data":"c9731c8d6197e63667f75876318fb3879e25933e3c0ea36c54b9789c3b3d19a5"} Feb 18 01:36:54 crc kubenswrapper[4847]: I0218 01:36:54.374322 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-898db" event={"ID":"f09d8132-0500-4681-a7b8-15c4b446ed34","Type":"ContainerStarted","Data":"784e3a56bd3ded8bbe23f8af0573c6c0deee835b46dc3b31880e91be47fdebec"} Feb 18 01:36:55 crc kubenswrapper[4847]: E0218 01:36:55.407718 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:36:56 crc kubenswrapper[4847]: I0218 01:36:56.400378 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5b2z" event={"ID":"eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c","Type":"ContainerStarted","Data":"21f5255efd8f6fdb2fbc6fa1e5b9b12d7f794013f91813715af48d38ab82062c"} Feb 18 01:36:56 crc kubenswrapper[4847]: I0218 01:36:56.403663 4847 generic.go:334] "Generic (PLEG): container finished" podID="f09d8132-0500-4681-a7b8-15c4b446ed34" containerID="7759b22c8f3fee79268fc7cf63191b84a5aee436cee48932a508b6903f5d5372" exitCode=0 Feb 18 01:36:56 crc kubenswrapper[4847]: I0218 01:36:56.403716 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-898db" event={"ID":"f09d8132-0500-4681-a7b8-15c4b446ed34","Type":"ContainerDied","Data":"7759b22c8f3fee79268fc7cf63191b84a5aee436cee48932a508b6903f5d5372"} Feb 18 01:36:57 crc kubenswrapper[4847]: I0218 01:36:57.427157 4847 generic.go:334] "Generic (PLEG): container finished" podID="eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c" containerID="21f5255efd8f6fdb2fbc6fa1e5b9b12d7f794013f91813715af48d38ab82062c" exitCode=0 Feb 18 01:36:57 crc kubenswrapper[4847]: I0218 01:36:57.430436 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-898db" event={"ID":"f09d8132-0500-4681-a7b8-15c4b446ed34","Type":"ContainerStarted","Data":"60786b54af3fcca805d81fc8e10c5c5730ec380dc25934d7e6b368bf5f205f7b"} Feb 18 01:36:57 crc kubenswrapper[4847]: I0218 01:36:57.430498 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5b2z" event={"ID":"eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c","Type":"ContainerDied","Data":"21f5255efd8f6fdb2fbc6fa1e5b9b12d7f794013f91813715af48d38ab82062c"} Feb 18 01:36:57 crc kubenswrapper[4847]: I0218 01:36:57.469728 4847 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/certified-operators-898db" podStartSLOduration=3.000738991 podStartE2EDuration="5.469704615s" podCreationTimestamp="2026-02-18 01:36:52 +0000 UTC" firstStartedPulling="2026-02-18 01:36:54.37644357 +0000 UTC m=+4287.753794552" lastFinishedPulling="2026-02-18 01:36:56.845409224 +0000 UTC m=+4290.222760176" observedRunningTime="2026-02-18 01:36:57.465049521 +0000 UTC m=+4290.842400473" watchObservedRunningTime="2026-02-18 01:36:57.469704615 +0000 UTC m=+4290.847055557" Feb 18 01:36:58 crc kubenswrapper[4847]: I0218 01:36:58.441583 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5b2z" event={"ID":"eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c","Type":"ContainerStarted","Data":"4d630f0bd6f82c69ca1f1387cfbd6811ad4f22b3f09e09fc18f488ebeecbb2cc"} Feb 18 01:36:58 crc kubenswrapper[4847]: I0218 01:36:58.475892 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b5b2z" podStartSLOduration=3.038798611 podStartE2EDuration="6.475873371s" podCreationTimestamp="2026-02-18 01:36:52 +0000 UTC" firstStartedPulling="2026-02-18 01:36:54.373130279 +0000 UTC m=+4287.750481271" lastFinishedPulling="2026-02-18 01:36:57.810205089 +0000 UTC m=+4291.187556031" observedRunningTime="2026-02-18 01:36:58.469806683 +0000 UTC m=+4291.847157665" watchObservedRunningTime="2026-02-18 01:36:58.475873371 +0000 UTC m=+4291.853224313" Feb 18 01:37:01 crc kubenswrapper[4847]: E0218 01:37:01.408375 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:37:02 crc kubenswrapper[4847]: I0218 01:37:02.713731 4847 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b5b2z" Feb 18 01:37:02 crc kubenswrapper[4847]: I0218 01:37:02.714114 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b5b2z" Feb 18 01:37:02 crc kubenswrapper[4847]: I0218 01:37:02.772413 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b5b2z" Feb 18 01:37:02 crc kubenswrapper[4847]: I0218 01:37:02.790088 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-898db" Feb 18 01:37:02 crc kubenswrapper[4847]: I0218 01:37:02.790162 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-898db" Feb 18 01:37:02 crc kubenswrapper[4847]: I0218 01:37:02.865433 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-898db" Feb 18 01:37:03 crc kubenswrapper[4847]: I0218 01:37:03.585299 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-898db" Feb 18 01:37:03 crc kubenswrapper[4847]: I0218 01:37:03.594395 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b5b2z" Feb 18 01:37:05 crc kubenswrapper[4847]: I0218 01:37:05.404525 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:37:05 crc kubenswrapper[4847]: E0218 01:37:05.405380 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:37:05 crc kubenswrapper[4847]: I0218 01:37:05.678646 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-898db"] Feb 18 01:37:05 crc kubenswrapper[4847]: I0218 01:37:05.679664 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-898db" podUID="f09d8132-0500-4681-a7b8-15c4b446ed34" containerName="registry-server" containerID="cri-o://60786b54af3fcca805d81fc8e10c5c5730ec380dc25934d7e6b368bf5f205f7b" gracePeriod=2 Feb 18 01:37:05 crc kubenswrapper[4847]: I0218 01:37:05.865899 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b5b2z"] Feb 18 01:37:05 crc kubenswrapper[4847]: I0218 01:37:05.866540 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b5b2z" podUID="eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c" containerName="registry-server" containerID="cri-o://4d630f0bd6f82c69ca1f1387cfbd6811ad4f22b3f09e09fc18f488ebeecbb2cc" gracePeriod=2 Feb 18 01:37:06 crc kubenswrapper[4847]: I0218 01:37:06.535499 4847 generic.go:334] "Generic (PLEG): container finished" podID="eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c" containerID="4d630f0bd6f82c69ca1f1387cfbd6811ad4f22b3f09e09fc18f488ebeecbb2cc" exitCode=0 Feb 18 01:37:06 crc kubenswrapper[4847]: I0218 01:37:06.535533 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5b2z" event={"ID":"eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c","Type":"ContainerDied","Data":"4d630f0bd6f82c69ca1f1387cfbd6811ad4f22b3f09e09fc18f488ebeecbb2cc"} Feb 18 01:37:06 crc kubenswrapper[4847]: I0218 01:37:06.540366 4847 generic.go:334] "Generic (PLEG): container finished" podID="f09d8132-0500-4681-a7b8-15c4b446ed34" 
containerID="60786b54af3fcca805d81fc8e10c5c5730ec380dc25934d7e6b368bf5f205f7b" exitCode=0 Feb 18 01:37:06 crc kubenswrapper[4847]: I0218 01:37:06.540442 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-898db" event={"ID":"f09d8132-0500-4681-a7b8-15c4b446ed34","Type":"ContainerDied","Data":"60786b54af3fcca805d81fc8e10c5c5730ec380dc25934d7e6b368bf5f205f7b"} Feb 18 01:37:06 crc kubenswrapper[4847]: I0218 01:37:06.940539 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-898db" Feb 18 01:37:06 crc kubenswrapper[4847]: I0218 01:37:06.946458 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b5b2z" Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.102063 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c-catalog-content\") pod \"eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c\" (UID: \"eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c\") " Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.102466 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nztfz\" (UniqueName: \"kubernetes.io/projected/eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c-kube-api-access-nztfz\") pod \"eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c\" (UID: \"eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c\") " Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.102487 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f09d8132-0500-4681-a7b8-15c4b446ed34-utilities\") pod \"f09d8132-0500-4681-a7b8-15c4b446ed34\" (UID: \"f09d8132-0500-4681-a7b8-15c4b446ed34\") " Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.102564 4847 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-2vrff\" (UniqueName: \"kubernetes.io/projected/f09d8132-0500-4681-a7b8-15c4b446ed34-kube-api-access-2vrff\") pod \"f09d8132-0500-4681-a7b8-15c4b446ed34\" (UID: \"f09d8132-0500-4681-a7b8-15c4b446ed34\") " Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.102712 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c-utilities\") pod \"eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c\" (UID: \"eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c\") " Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.102761 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f09d8132-0500-4681-a7b8-15c4b446ed34-catalog-content\") pod \"f09d8132-0500-4681-a7b8-15c4b446ed34\" (UID: \"f09d8132-0500-4681-a7b8-15c4b446ed34\") " Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.103591 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f09d8132-0500-4681-a7b8-15c4b446ed34-utilities" (OuterVolumeSpecName: "utilities") pod "f09d8132-0500-4681-a7b8-15c4b446ed34" (UID: "f09d8132-0500-4681-a7b8-15c4b446ed34"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.104322 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c-utilities" (OuterVolumeSpecName: "utilities") pod "eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c" (UID: "eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.108592 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c-kube-api-access-nztfz" (OuterVolumeSpecName: "kube-api-access-nztfz") pod "eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c" (UID: "eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c"). InnerVolumeSpecName "kube-api-access-nztfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.119905 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f09d8132-0500-4681-a7b8-15c4b446ed34-kube-api-access-2vrff" (OuterVolumeSpecName: "kube-api-access-2vrff") pod "f09d8132-0500-4681-a7b8-15c4b446ed34" (UID: "f09d8132-0500-4681-a7b8-15c4b446ed34"). InnerVolumeSpecName "kube-api-access-2vrff". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.206364 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vrff\" (UniqueName: \"kubernetes.io/projected/f09d8132-0500-4681-a7b8-15c4b446ed34-kube-api-access-2vrff\") on node \"crc\" DevicePath \"\"" Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.206464 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.206517 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f09d8132-0500-4681-a7b8-15c4b446ed34-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.206532 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nztfz\" (UniqueName: 
\"kubernetes.io/projected/eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c-kube-api-access-nztfz\") on node \"crc\" DevicePath \"\"" Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.465162 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c" (UID: "eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.486483 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f09d8132-0500-4681-a7b8-15c4b446ed34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f09d8132-0500-4681-a7b8-15c4b446ed34" (UID: "f09d8132-0500-4681-a7b8-15c4b446ed34"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.515139 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.515198 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f09d8132-0500-4681-a7b8-15c4b446ed34-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.551102 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b5b2z" Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.551093 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b5b2z" event={"ID":"eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c","Type":"ContainerDied","Data":"3cda605c6659d345b55f723d8d84e01ba1cb1deb71b8dd57ee19ce8b5421ae9e"} Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.551276 4847 scope.go:117] "RemoveContainer" containerID="4d630f0bd6f82c69ca1f1387cfbd6811ad4f22b3f09e09fc18f488ebeecbb2cc" Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.558577 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-898db" event={"ID":"f09d8132-0500-4681-a7b8-15c4b446ed34","Type":"ContainerDied","Data":"784e3a56bd3ded8bbe23f8af0573c6c0deee835b46dc3b31880e91be47fdebec"} Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.558724 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-898db" Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.590103 4847 scope.go:117] "RemoveContainer" containerID="21f5255efd8f6fdb2fbc6fa1e5b9b12d7f794013f91813715af48d38ab82062c" Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.607672 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b5b2z"] Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.618399 4847 scope.go:117] "RemoveContainer" containerID="0a771c2e0ad421fa01ed6b70ea1341b320dcbe85955f3a8b9a7ff6e9c0e2b2eb" Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.619969 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b5b2z"] Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.631233 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-898db"] Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.644556 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-898db"] Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.681665 4847 scope.go:117] "RemoveContainer" containerID="60786b54af3fcca805d81fc8e10c5c5730ec380dc25934d7e6b368bf5f205f7b" Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.724068 4847 scope.go:117] "RemoveContainer" containerID="7759b22c8f3fee79268fc7cf63191b84a5aee436cee48932a508b6903f5d5372" Feb 18 01:37:07 crc kubenswrapper[4847]: I0218 01:37:07.749585 4847 scope.go:117] "RemoveContainer" containerID="c9731c8d6197e63667f75876318fb3879e25933e3c0ea36c54b9789c3b3d19a5" Feb 18 01:37:08 crc kubenswrapper[4847]: E0218 01:37:08.406837 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:37:09 crc kubenswrapper[4847]: I0218 01:37:09.426639 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c" path="/var/lib/kubelet/pods/eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c/volumes" Feb 18 01:37:09 crc kubenswrapper[4847]: I0218 01:37:09.431049 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f09d8132-0500-4681-a7b8-15c4b446ed34" path="/var/lib/kubelet/pods/f09d8132-0500-4681-a7b8-15c4b446ed34/volumes" Feb 18 01:37:13 crc kubenswrapper[4847]: E0218 01:37:13.408046 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:37:18 crc kubenswrapper[4847]: I0218 01:37:18.403996 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:37:18 crc kubenswrapper[4847]: E0218 01:37:18.405057 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:37:19 crc kubenswrapper[4847]: E0218 01:37:19.406597 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:37:27 crc kubenswrapper[4847]: E0218 01:37:27.420459 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:37:33 crc kubenswrapper[4847]: I0218 01:37:33.406153 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:37:33 crc kubenswrapper[4847]: E0218 01:37:33.407183 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:37:33 crc kubenswrapper[4847]: E0218 01:37:33.408482 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:37:42 crc kubenswrapper[4847]: E0218 01:37:42.407764 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:37:44 crc 
kubenswrapper[4847]: I0218 01:37:44.404173 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:37:44 crc kubenswrapper[4847]: E0218 01:37:44.404764 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:37:46 crc kubenswrapper[4847]: E0218 01:37:46.406383 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:37:54 crc kubenswrapper[4847]: E0218 01:37:54.407226 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:37:58 crc kubenswrapper[4847]: I0218 01:37:58.404907 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:37:58 crc kubenswrapper[4847]: E0218 01:37:58.405815 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:37:58 crc kubenswrapper[4847]: E0218 01:37:58.410399 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:38:09 crc kubenswrapper[4847]: E0218 01:38:09.412080 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:38:12 crc kubenswrapper[4847]: I0218 01:38:12.404911 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:38:12 crc kubenswrapper[4847]: E0218 01:38:12.405792 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:38:13 crc kubenswrapper[4847]: I0218 01:38:13.408160 4847 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 01:38:13 crc kubenswrapper[4847]: E0218 01:38:13.541056 4847 log.go:32] "PullImage from image service failed" err="rpc error: code 
= Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:38:13 crc kubenswrapper[4847]: E0218 01:38:13.541222 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:38:13 crc kubenswrapper[4847]: E0218 01:38:13.541642 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjwt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-k4t5r_openstack(452f74c1-fa5f-464b-9943-a4a1c2d5c48a): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:38:13 crc kubenswrapper[4847]: E0218 01:38:13.542973 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:38:20 crc kubenswrapper[4847]: E0218 01:38:20.409685 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:38:26 crc kubenswrapper[4847]: I0218 01:38:26.405078 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:38:26 crc kubenswrapper[4847]: E0218 01:38:26.407881 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:38:26 crc kubenswrapper[4847]: E0218 01:38:26.408464 4847 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:38:35 crc kubenswrapper[4847]: E0218 01:38:35.408267 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:38:38 crc kubenswrapper[4847]: E0218 01:38:38.408191 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:38:41 crc kubenswrapper[4847]: I0218 01:38:41.405402 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:38:41 crc kubenswrapper[4847]: E0218 01:38:41.406912 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:38:47 crc kubenswrapper[4847]: E0218 01:38:47.522103 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown 
desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:38:47 crc kubenswrapper[4847]: E0218 01:38:47.522667 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:38:47 crc kubenswrapper[4847]: E0218 01:38:47.522812 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9fhbdh67dh64h568hch65dhb8h67chf7h66dhfh585h565h85h554h68ch8bh56ch677h57bhb5h544h575hd7h5f8hc7h66dh669h57bh589h56cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:38:47 crc kubenswrapper[4847]: E0218 01:38:47.524748 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:38:52 crc kubenswrapper[4847]: E0218 01:38:52.407143 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:38:54 crc kubenswrapper[4847]: I0218 01:38:54.405091 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:38:54 crc kubenswrapper[4847]: E0218 01:38:54.407058 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:39:02 crc kubenswrapper[4847]: E0218 01:39:02.411327 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:39:04 crc kubenswrapper[4847]: E0218 01:39:04.406873 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:39:05 
crc kubenswrapper[4847]: I0218 01:39:05.404961 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:39:05 crc kubenswrapper[4847]: E0218 01:39:05.405529 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:39:16 crc kubenswrapper[4847]: E0218 01:39:16.408024 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:39:17 crc kubenswrapper[4847]: E0218 01:39:17.414946 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:39:20 crc kubenswrapper[4847]: I0218 01:39:20.404917 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:39:20 crc kubenswrapper[4847]: E0218 01:39:20.406008 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:39:27 crc kubenswrapper[4847]: E0218 01:39:27.419005 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:39:32 crc kubenswrapper[4847]: E0218 01:39:32.410105 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:39:35 crc kubenswrapper[4847]: I0218 01:39:35.405446 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:39:35 crc kubenswrapper[4847]: E0218 01:39:35.406868 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:39:39 crc kubenswrapper[4847]: E0218 01:39:39.407585 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:39:46 crc kubenswrapper[4847]: E0218 01:39:46.407800 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:39:50 crc kubenswrapper[4847]: I0218 01:39:50.405331 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:39:50 crc kubenswrapper[4847]: E0218 01:39:50.406328 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:39:54 crc kubenswrapper[4847]: E0218 01:39:54.407312 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:40:00 crc kubenswrapper[4847]: E0218 01:40:00.407307 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:40:05 crc kubenswrapper[4847]: I0218 01:40:05.404276 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:40:05 crc kubenswrapper[4847]: E0218 01:40:05.405415 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:40:06 crc kubenswrapper[4847]: E0218 01:40:06.406195 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:40:14 crc kubenswrapper[4847]: E0218 01:40:14.406784 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:40:17 crc kubenswrapper[4847]: E0218 01:40:17.421709 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:40:18 crc kubenswrapper[4847]: I0218 
01:40:18.404997 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:40:18 crc kubenswrapper[4847]: E0218 01:40:18.406077 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:40:26 crc kubenswrapper[4847]: E0218 01:40:26.409462 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:40:32 crc kubenswrapper[4847]: I0218 01:40:32.404730 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:40:32 crc kubenswrapper[4847]: E0218 01:40:32.405866 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:40:32 crc kubenswrapper[4847]: E0218 01:40:32.407580 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:40:39 crc kubenswrapper[4847]: E0218 01:40:39.407279 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:40:44 crc kubenswrapper[4847]: I0218 01:40:44.404377 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:40:44 crc kubenswrapper[4847]: E0218 01:40:44.405834 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:40:45 crc kubenswrapper[4847]: E0218 01:40:45.407156 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:40:52 crc kubenswrapper[4847]: E0218 01:40:52.407898 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:40:56 crc kubenswrapper[4847]: I0218 01:40:56.404823 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:40:56 crc kubenswrapper[4847]: E0218 01:40:56.405794 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:40:57 crc kubenswrapper[4847]: E0218 01:40:57.424953 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:41:03 crc kubenswrapper[4847]: E0218 01:41:03.415548 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:41:11 crc kubenswrapper[4847]: I0218 01:41:11.405110 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:41:11 crc kubenswrapper[4847]: E0218 01:41:11.406319 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:41:12 crc kubenswrapper[4847]: E0218 01:41:12.412580 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:41:18 crc kubenswrapper[4847]: E0218 01:41:18.407318 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:41:26 crc kubenswrapper[4847]: I0218 01:41:26.404820 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:41:26 crc kubenswrapper[4847]: E0218 01:41:26.405674 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:41:27 crc kubenswrapper[4847]: E0218 01:41:27.433737 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:41:29 crc kubenswrapper[4847]: E0218 01:41:29.407716 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:41:39 crc kubenswrapper[4847]: I0218 01:41:39.404470 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:41:39 crc kubenswrapper[4847]: E0218 01:41:39.405202 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:41:40 crc kubenswrapper[4847]: E0218 01:41:40.407159 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:41:40 crc kubenswrapper[4847]: E0218 01:41:40.407504 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:41:49 crc kubenswrapper[4847]: I0218 01:41:49.859728 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tqw8d"] Feb 18 01:41:49 crc kubenswrapper[4847]: E0218 01:41:49.860749 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c" containerName="extract-utilities" Feb 18 01:41:49 crc kubenswrapper[4847]: I0218 01:41:49.860766 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c" containerName="extract-utilities" Feb 18 01:41:49 crc kubenswrapper[4847]: E0218 01:41:49.860813 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09d8132-0500-4681-a7b8-15c4b446ed34" containerName="registry-server" Feb 18 01:41:49 crc kubenswrapper[4847]: I0218 01:41:49.860821 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09d8132-0500-4681-a7b8-15c4b446ed34" containerName="registry-server" Feb 18 01:41:49 crc kubenswrapper[4847]: E0218 01:41:49.860840 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09d8132-0500-4681-a7b8-15c4b446ed34" containerName="extract-content" Feb 18 01:41:49 crc kubenswrapper[4847]: I0218 01:41:49.860848 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09d8132-0500-4681-a7b8-15c4b446ed34" containerName="extract-content" Feb 18 01:41:49 crc kubenswrapper[4847]: E0218 01:41:49.860866 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09d8132-0500-4681-a7b8-15c4b446ed34" containerName="extract-utilities" Feb 18 01:41:49 crc kubenswrapper[4847]: I0218 01:41:49.860874 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09d8132-0500-4681-a7b8-15c4b446ed34" containerName="extract-utilities" Feb 18 01:41:49 crc kubenswrapper[4847]: E0218 01:41:49.860894 4847 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c" containerName="registry-server" Feb 18 01:41:49 crc kubenswrapper[4847]: I0218 01:41:49.860902 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c" containerName="registry-server" Feb 18 01:41:49 crc kubenswrapper[4847]: E0218 01:41:49.860924 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c" containerName="extract-content" Feb 18 01:41:49 crc kubenswrapper[4847]: I0218 01:41:49.860932 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c" containerName="extract-content" Feb 18 01:41:49 crc kubenswrapper[4847]: I0218 01:41:49.861192 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="f09d8132-0500-4681-a7b8-15c4b446ed34" containerName="registry-server" Feb 18 01:41:49 crc kubenswrapper[4847]: I0218 01:41:49.861220 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa1c140-eb5a-43e7-a716-d3a3f7b2b89c" containerName="registry-server" Feb 18 01:41:49 crc kubenswrapper[4847]: I0218 01:41:49.863006 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tqw8d" Feb 18 01:41:49 crc kubenswrapper[4847]: I0218 01:41:49.875550 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tqw8d"] Feb 18 01:41:49 crc kubenswrapper[4847]: I0218 01:41:49.938266 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfs8p\" (UniqueName: \"kubernetes.io/projected/363d0cd5-dda4-4ef4-ab5c-971035c93645-kube-api-access-pfs8p\") pod \"redhat-operators-tqw8d\" (UID: \"363d0cd5-dda4-4ef4-ab5c-971035c93645\") " pod="openshift-marketplace/redhat-operators-tqw8d" Feb 18 01:41:49 crc kubenswrapper[4847]: I0218 01:41:49.938648 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/363d0cd5-dda4-4ef4-ab5c-971035c93645-catalog-content\") pod \"redhat-operators-tqw8d\" (UID: \"363d0cd5-dda4-4ef4-ab5c-971035c93645\") " pod="openshift-marketplace/redhat-operators-tqw8d" Feb 18 01:41:49 crc kubenswrapper[4847]: I0218 01:41:49.938752 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/363d0cd5-dda4-4ef4-ab5c-971035c93645-utilities\") pod \"redhat-operators-tqw8d\" (UID: \"363d0cd5-dda4-4ef4-ab5c-971035c93645\") " pod="openshift-marketplace/redhat-operators-tqw8d" Feb 18 01:41:50 crc kubenswrapper[4847]: I0218 01:41:50.041132 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfs8p\" (UniqueName: \"kubernetes.io/projected/363d0cd5-dda4-4ef4-ab5c-971035c93645-kube-api-access-pfs8p\") pod \"redhat-operators-tqw8d\" (UID: \"363d0cd5-dda4-4ef4-ab5c-971035c93645\") " pod="openshift-marketplace/redhat-operators-tqw8d" Feb 18 01:41:50 crc kubenswrapper[4847]: I0218 01:41:50.041209 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/363d0cd5-dda4-4ef4-ab5c-971035c93645-catalog-content\") pod \"redhat-operators-tqw8d\" (UID: \"363d0cd5-dda4-4ef4-ab5c-971035c93645\") " pod="openshift-marketplace/redhat-operators-tqw8d" Feb 18 01:41:50 crc kubenswrapper[4847]: I0218 01:41:50.041307 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/363d0cd5-dda4-4ef4-ab5c-971035c93645-utilities\") pod \"redhat-operators-tqw8d\" (UID: \"363d0cd5-dda4-4ef4-ab5c-971035c93645\") " pod="openshift-marketplace/redhat-operators-tqw8d" Feb 18 01:41:50 crc kubenswrapper[4847]: I0218 01:41:50.042100 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/363d0cd5-dda4-4ef4-ab5c-971035c93645-catalog-content\") pod \"redhat-operators-tqw8d\" (UID: \"363d0cd5-dda4-4ef4-ab5c-971035c93645\") " pod="openshift-marketplace/redhat-operators-tqw8d" Feb 18 01:41:50 crc kubenswrapper[4847]: I0218 01:41:50.042114 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/363d0cd5-dda4-4ef4-ab5c-971035c93645-utilities\") pod \"redhat-operators-tqw8d\" (UID: \"363d0cd5-dda4-4ef4-ab5c-971035c93645\") " pod="openshift-marketplace/redhat-operators-tqw8d" Feb 18 01:41:50 crc kubenswrapper[4847]: I0218 01:41:50.074362 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfs8p\" (UniqueName: \"kubernetes.io/projected/363d0cd5-dda4-4ef4-ab5c-971035c93645-kube-api-access-pfs8p\") pod \"redhat-operators-tqw8d\" (UID: \"363d0cd5-dda4-4ef4-ab5c-971035c93645\") " pod="openshift-marketplace/redhat-operators-tqw8d" Feb 18 01:41:50 crc kubenswrapper[4847]: I0218 01:41:50.200930 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tqw8d" Feb 18 01:41:51 crc kubenswrapper[4847]: I0218 01:41:51.169744 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tqw8d"] Feb 18 01:41:51 crc kubenswrapper[4847]: I0218 01:41:51.244147 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v8fmv"] Feb 18 01:41:51 crc kubenswrapper[4847]: I0218 01:41:51.247158 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8fmv" Feb 18 01:41:51 crc kubenswrapper[4847]: I0218 01:41:51.258901 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8fmv"] Feb 18 01:41:51 crc kubenswrapper[4847]: I0218 01:41:51.376731 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/814379b8-9d14-4d63-89c2-768fbe251782-utilities\") pod \"redhat-marketplace-v8fmv\" (UID: \"814379b8-9d14-4d63-89c2-768fbe251782\") " pod="openshift-marketplace/redhat-marketplace-v8fmv" Feb 18 01:41:51 crc kubenswrapper[4847]: I0218 01:41:51.376763 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/814379b8-9d14-4d63-89c2-768fbe251782-catalog-content\") pod \"redhat-marketplace-v8fmv\" (UID: \"814379b8-9d14-4d63-89c2-768fbe251782\") " pod="openshift-marketplace/redhat-marketplace-v8fmv" Feb 18 01:41:51 crc kubenswrapper[4847]: I0218 01:41:51.376939 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddspm\" (UniqueName: \"kubernetes.io/projected/814379b8-9d14-4d63-89c2-768fbe251782-kube-api-access-ddspm\") pod \"redhat-marketplace-v8fmv\" (UID: \"814379b8-9d14-4d63-89c2-768fbe251782\") " 
pod="openshift-marketplace/redhat-marketplace-v8fmv" Feb 18 01:41:51 crc kubenswrapper[4847]: I0218 01:41:51.478995 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/814379b8-9d14-4d63-89c2-768fbe251782-utilities\") pod \"redhat-marketplace-v8fmv\" (UID: \"814379b8-9d14-4d63-89c2-768fbe251782\") " pod="openshift-marketplace/redhat-marketplace-v8fmv" Feb 18 01:41:51 crc kubenswrapper[4847]: I0218 01:41:51.479039 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/814379b8-9d14-4d63-89c2-768fbe251782-catalog-content\") pod \"redhat-marketplace-v8fmv\" (UID: \"814379b8-9d14-4d63-89c2-768fbe251782\") " pod="openshift-marketplace/redhat-marketplace-v8fmv" Feb 18 01:41:51 crc kubenswrapper[4847]: I0218 01:41:51.479085 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddspm\" (UniqueName: \"kubernetes.io/projected/814379b8-9d14-4d63-89c2-768fbe251782-kube-api-access-ddspm\") pod \"redhat-marketplace-v8fmv\" (UID: \"814379b8-9d14-4d63-89c2-768fbe251782\") " pod="openshift-marketplace/redhat-marketplace-v8fmv" Feb 18 01:41:51 crc kubenswrapper[4847]: I0218 01:41:51.479815 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/814379b8-9d14-4d63-89c2-768fbe251782-catalog-content\") pod \"redhat-marketplace-v8fmv\" (UID: \"814379b8-9d14-4d63-89c2-768fbe251782\") " pod="openshift-marketplace/redhat-marketplace-v8fmv" Feb 18 01:41:51 crc kubenswrapper[4847]: I0218 01:41:51.479894 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/814379b8-9d14-4d63-89c2-768fbe251782-utilities\") pod \"redhat-marketplace-v8fmv\" (UID: \"814379b8-9d14-4d63-89c2-768fbe251782\") " pod="openshift-marketplace/redhat-marketplace-v8fmv" 
Feb 18 01:41:51 crc kubenswrapper[4847]: I0218 01:41:51.499503 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddspm\" (UniqueName: \"kubernetes.io/projected/814379b8-9d14-4d63-89c2-768fbe251782-kube-api-access-ddspm\") pod \"redhat-marketplace-v8fmv\" (UID: \"814379b8-9d14-4d63-89c2-768fbe251782\") " pod="openshift-marketplace/redhat-marketplace-v8fmv" Feb 18 01:41:51 crc kubenswrapper[4847]: I0218 01:41:51.611053 4847 generic.go:334] "Generic (PLEG): container finished" podID="363d0cd5-dda4-4ef4-ab5c-971035c93645" containerID="21325b827c963a78578ddce39788a86feb7917b9628e04ef47a2e2ca5100eca8" exitCode=0 Feb 18 01:41:51 crc kubenswrapper[4847]: I0218 01:41:51.611097 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqw8d" event={"ID":"363d0cd5-dda4-4ef4-ab5c-971035c93645","Type":"ContainerDied","Data":"21325b827c963a78578ddce39788a86feb7917b9628e04ef47a2e2ca5100eca8"} Feb 18 01:41:51 crc kubenswrapper[4847]: I0218 01:41:51.611127 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqw8d" event={"ID":"363d0cd5-dda4-4ef4-ab5c-971035c93645","Type":"ContainerStarted","Data":"76a6cf639fa407b3528a5a4a13ec708bbd56e6a4966c9a44f9f952484b92b6f8"} Feb 18 01:41:51 crc kubenswrapper[4847]: I0218 01:41:51.679076 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8fmv" Feb 18 01:41:52 crc kubenswrapper[4847]: I0218 01:41:52.116866 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8fmv"] Feb 18 01:41:52 crc kubenswrapper[4847]: I0218 01:41:52.624888 4847 generic.go:334] "Generic (PLEG): container finished" podID="814379b8-9d14-4d63-89c2-768fbe251782" containerID="584ec746be498b08e91bb4679a20fe9e6539b49fabf28b0a02eac2bd83640619" exitCode=0 Feb 18 01:41:52 crc kubenswrapper[4847]: I0218 01:41:52.624935 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8fmv" event={"ID":"814379b8-9d14-4d63-89c2-768fbe251782","Type":"ContainerDied","Data":"584ec746be498b08e91bb4679a20fe9e6539b49fabf28b0a02eac2bd83640619"} Feb 18 01:41:52 crc kubenswrapper[4847]: I0218 01:41:52.625816 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8fmv" event={"ID":"814379b8-9d14-4d63-89c2-768fbe251782","Type":"ContainerStarted","Data":"7fc8072b944c7194eac92e9524782088927a969d7ba1ed8448bea88983fd9a88"} Feb 18 01:41:52 crc kubenswrapper[4847]: I0218 01:41:52.628937 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqw8d" event={"ID":"363d0cd5-dda4-4ef4-ab5c-971035c93645","Type":"ContainerStarted","Data":"1d167d599e7871efdf48dd236c8ebc9e0772a7533a45c2790bd4605396eb7410"} Feb 18 01:41:53 crc kubenswrapper[4847]: I0218 01:41:53.405034 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:41:53 crc kubenswrapper[4847]: E0218 01:41:53.406120 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:41:53 crc kubenswrapper[4847]: E0218 01:41:53.408496 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:41:53 crc kubenswrapper[4847]: I0218 01:41:53.646903 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8fmv" event={"ID":"814379b8-9d14-4d63-89c2-768fbe251782","Type":"ContainerStarted","Data":"81d1438ce3e7e81469ca016b41ac57539bf49fa247683c9c5ddafc0ce68028ca"} Feb 18 01:41:54 crc kubenswrapper[4847]: I0218 01:41:54.675417 4847 generic.go:334] "Generic (PLEG): container finished" podID="363d0cd5-dda4-4ef4-ab5c-971035c93645" containerID="1d167d599e7871efdf48dd236c8ebc9e0772a7533a45c2790bd4605396eb7410" exitCode=0 Feb 18 01:41:54 crc kubenswrapper[4847]: I0218 01:41:54.675480 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqw8d" event={"ID":"363d0cd5-dda4-4ef4-ab5c-971035c93645","Type":"ContainerDied","Data":"1d167d599e7871efdf48dd236c8ebc9e0772a7533a45c2790bd4605396eb7410"} Feb 18 01:41:55 crc kubenswrapper[4847]: E0218 01:41:55.408784 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:41:55 crc kubenswrapper[4847]: I0218 01:41:55.697152 4847 
generic.go:334] "Generic (PLEG): container finished" podID="814379b8-9d14-4d63-89c2-768fbe251782" containerID="81d1438ce3e7e81469ca016b41ac57539bf49fa247683c9c5ddafc0ce68028ca" exitCode=0 Feb 18 01:41:55 crc kubenswrapper[4847]: I0218 01:41:55.697221 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8fmv" event={"ID":"814379b8-9d14-4d63-89c2-768fbe251782","Type":"ContainerDied","Data":"81d1438ce3e7e81469ca016b41ac57539bf49fa247683c9c5ddafc0ce68028ca"} Feb 18 01:41:56 crc kubenswrapper[4847]: I0218 01:41:56.712421 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8fmv" event={"ID":"814379b8-9d14-4d63-89c2-768fbe251782","Type":"ContainerStarted","Data":"d7abbf20b0a8c96ea34c45e1457515839c4fc4f899092b9c2a4d3f22d5b70c59"} Feb 18 01:41:56 crc kubenswrapper[4847]: I0218 01:41:56.717755 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqw8d" event={"ID":"363d0cd5-dda4-4ef4-ab5c-971035c93645","Type":"ContainerStarted","Data":"c9ecdbe7446d485b95d589b8904cf616fef696ccb0233bba00961ee546ff242f"} Feb 18 01:41:56 crc kubenswrapper[4847]: I0218 01:41:56.734446 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v8fmv" podStartSLOduration=2.236675779 podStartE2EDuration="5.734401934s" podCreationTimestamp="2026-02-18 01:41:51 +0000 UTC" firstStartedPulling="2026-02-18 01:41:52.627918921 +0000 UTC m=+4586.005269883" lastFinishedPulling="2026-02-18 01:41:56.125645086 +0000 UTC m=+4589.502996038" observedRunningTime="2026-02-18 01:41:56.731140544 +0000 UTC m=+4590.108491526" watchObservedRunningTime="2026-02-18 01:41:56.734401934 +0000 UTC m=+4590.111752926" Feb 18 01:41:56 crc kubenswrapper[4847]: I0218 01:41:56.766257 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tqw8d" 
podStartSLOduration=3.5337904 podStartE2EDuration="7.766226489s" podCreationTimestamp="2026-02-18 01:41:49 +0000 UTC" firstStartedPulling="2026-02-18 01:41:51.612769823 +0000 UTC m=+4584.990120765" lastFinishedPulling="2026-02-18 01:41:55.845205912 +0000 UTC m=+4589.222556854" observedRunningTime="2026-02-18 01:41:56.755096095 +0000 UTC m=+4590.132447037" watchObservedRunningTime="2026-02-18 01:41:56.766226489 +0000 UTC m=+4590.143577481" Feb 18 01:42:00 crc kubenswrapper[4847]: I0218 01:42:00.239266 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tqw8d" Feb 18 01:42:00 crc kubenswrapper[4847]: I0218 01:42:00.239820 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tqw8d" Feb 18 01:42:01 crc kubenswrapper[4847]: I0218 01:42:01.314522 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tqw8d" podUID="363d0cd5-dda4-4ef4-ab5c-971035c93645" containerName="registry-server" probeResult="failure" output=< Feb 18 01:42:01 crc kubenswrapper[4847]: timeout: failed to connect service ":50051" within 1s Feb 18 01:42:01 crc kubenswrapper[4847]: > Feb 18 01:42:01 crc kubenswrapper[4847]: I0218 01:42:01.679559 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v8fmv" Feb 18 01:42:01 crc kubenswrapper[4847]: I0218 01:42:01.680019 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v8fmv" Feb 18 01:42:01 crc kubenswrapper[4847]: I0218 01:42:01.747707 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v8fmv" Feb 18 01:42:01 crc kubenswrapper[4847]: I0218 01:42:01.824914 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v8fmv" Feb 18 
01:42:02 crc kubenswrapper[4847]: I0218 01:42:02.235010 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8fmv"] Feb 18 01:42:03 crc kubenswrapper[4847]: I0218 01:42:03.800875 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v8fmv" podUID="814379b8-9d14-4d63-89c2-768fbe251782" containerName="registry-server" containerID="cri-o://d7abbf20b0a8c96ea34c45e1457515839c4fc4f899092b9c2a4d3f22d5b70c59" gracePeriod=2 Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 01:42:04.362371 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8fmv" Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 01:42:04.423051 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/814379b8-9d14-4d63-89c2-768fbe251782-catalog-content\") pod \"814379b8-9d14-4d63-89c2-768fbe251782\" (UID: \"814379b8-9d14-4d63-89c2-768fbe251782\") " Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 01:42:04.423198 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/814379b8-9d14-4d63-89c2-768fbe251782-utilities\") pod \"814379b8-9d14-4d63-89c2-768fbe251782\" (UID: \"814379b8-9d14-4d63-89c2-768fbe251782\") " Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 01:42:04.423226 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddspm\" (UniqueName: \"kubernetes.io/projected/814379b8-9d14-4d63-89c2-768fbe251782-kube-api-access-ddspm\") pod \"814379b8-9d14-4d63-89c2-768fbe251782\" (UID: \"814379b8-9d14-4d63-89c2-768fbe251782\") " Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 01:42:04.424797 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/814379b8-9d14-4d63-89c2-768fbe251782-utilities" (OuterVolumeSpecName: "utilities") pod "814379b8-9d14-4d63-89c2-768fbe251782" (UID: "814379b8-9d14-4d63-89c2-768fbe251782"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 01:42:04.437226 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/814379b8-9d14-4d63-89c2-768fbe251782-kube-api-access-ddspm" (OuterVolumeSpecName: "kube-api-access-ddspm") pod "814379b8-9d14-4d63-89c2-768fbe251782" (UID: "814379b8-9d14-4d63-89c2-768fbe251782"). InnerVolumeSpecName "kube-api-access-ddspm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 01:42:04.458148 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/814379b8-9d14-4d63-89c2-768fbe251782-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "814379b8-9d14-4d63-89c2-768fbe251782" (UID: "814379b8-9d14-4d63-89c2-768fbe251782"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 01:42:04.525356 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/814379b8-9d14-4d63-89c2-768fbe251782-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 01:42:04.525571 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/814379b8-9d14-4d63-89c2-768fbe251782-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 01:42:04.525654 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddspm\" (UniqueName: \"kubernetes.io/projected/814379b8-9d14-4d63-89c2-768fbe251782-kube-api-access-ddspm\") on node \"crc\" DevicePath \"\"" Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 01:42:04.811902 4847 generic.go:334] "Generic (PLEG): container finished" podID="814379b8-9d14-4d63-89c2-768fbe251782" containerID="d7abbf20b0a8c96ea34c45e1457515839c4fc4f899092b9c2a4d3f22d5b70c59" exitCode=0 Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 01:42:04.811941 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8fmv" event={"ID":"814379b8-9d14-4d63-89c2-768fbe251782","Type":"ContainerDied","Data":"d7abbf20b0a8c96ea34c45e1457515839c4fc4f899092b9c2a4d3f22d5b70c59"} Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 01:42:04.811965 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8fmv" event={"ID":"814379b8-9d14-4d63-89c2-768fbe251782","Type":"ContainerDied","Data":"7fc8072b944c7194eac92e9524782088927a969d7ba1ed8448bea88983fd9a88"} Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 01:42:04.811984 4847 scope.go:117] "RemoveContainer" containerID="d7abbf20b0a8c96ea34c45e1457515839c4fc4f899092b9c2a4d3f22d5b70c59" Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 
01:42:04.811988 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8fmv" Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 01:42:04.857784 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8fmv"] Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 01:42:04.858201 4847 scope.go:117] "RemoveContainer" containerID="81d1438ce3e7e81469ca016b41ac57539bf49fa247683c9c5ddafc0ce68028ca" Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 01:42:04.871143 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8fmv"] Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 01:42:04.888676 4847 scope.go:117] "RemoveContainer" containerID="584ec746be498b08e91bb4679a20fe9e6539b49fabf28b0a02eac2bd83640619" Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 01:42:04.953694 4847 scope.go:117] "RemoveContainer" containerID="d7abbf20b0a8c96ea34c45e1457515839c4fc4f899092b9c2a4d3f22d5b70c59" Feb 18 01:42:04 crc kubenswrapper[4847]: E0218 01:42:04.954098 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7abbf20b0a8c96ea34c45e1457515839c4fc4f899092b9c2a4d3f22d5b70c59\": container with ID starting with d7abbf20b0a8c96ea34c45e1457515839c4fc4f899092b9c2a4d3f22d5b70c59 not found: ID does not exist" containerID="d7abbf20b0a8c96ea34c45e1457515839c4fc4f899092b9c2a4d3f22d5b70c59" Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 01:42:04.954138 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7abbf20b0a8c96ea34c45e1457515839c4fc4f899092b9c2a4d3f22d5b70c59"} err="failed to get container status \"d7abbf20b0a8c96ea34c45e1457515839c4fc4f899092b9c2a4d3f22d5b70c59\": rpc error: code = NotFound desc = could not find container \"d7abbf20b0a8c96ea34c45e1457515839c4fc4f899092b9c2a4d3f22d5b70c59\": container with ID starting with 
d7abbf20b0a8c96ea34c45e1457515839c4fc4f899092b9c2a4d3f22d5b70c59 not found: ID does not exist" Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 01:42:04.954157 4847 scope.go:117] "RemoveContainer" containerID="81d1438ce3e7e81469ca016b41ac57539bf49fa247683c9c5ddafc0ce68028ca" Feb 18 01:42:04 crc kubenswrapper[4847]: E0218 01:42:04.954623 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81d1438ce3e7e81469ca016b41ac57539bf49fa247683c9c5ddafc0ce68028ca\": container with ID starting with 81d1438ce3e7e81469ca016b41ac57539bf49fa247683c9c5ddafc0ce68028ca not found: ID does not exist" containerID="81d1438ce3e7e81469ca016b41ac57539bf49fa247683c9c5ddafc0ce68028ca" Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 01:42:04.954886 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81d1438ce3e7e81469ca016b41ac57539bf49fa247683c9c5ddafc0ce68028ca"} err="failed to get container status \"81d1438ce3e7e81469ca016b41ac57539bf49fa247683c9c5ddafc0ce68028ca\": rpc error: code = NotFound desc = could not find container \"81d1438ce3e7e81469ca016b41ac57539bf49fa247683c9c5ddafc0ce68028ca\": container with ID starting with 81d1438ce3e7e81469ca016b41ac57539bf49fa247683c9c5ddafc0ce68028ca not found: ID does not exist" Feb 18 01:42:04 crc kubenswrapper[4847]: I0218 01:42:04.954933 4847 scope.go:117] "RemoveContainer" containerID="584ec746be498b08e91bb4679a20fe9e6539b49fabf28b0a02eac2bd83640619" Feb 18 01:42:04 crc kubenswrapper[4847]: E0218 01:42:04.955279 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"584ec746be498b08e91bb4679a20fe9e6539b49fabf28b0a02eac2bd83640619\": container with ID starting with 584ec746be498b08e91bb4679a20fe9e6539b49fabf28b0a02eac2bd83640619 not found: ID does not exist" containerID="584ec746be498b08e91bb4679a20fe9e6539b49fabf28b0a02eac2bd83640619" Feb 18 01:42:04 crc 
kubenswrapper[4847]: I0218 01:42:04.955300 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"584ec746be498b08e91bb4679a20fe9e6539b49fabf28b0a02eac2bd83640619"} err="failed to get container status \"584ec746be498b08e91bb4679a20fe9e6539b49fabf28b0a02eac2bd83640619\": rpc error: code = NotFound desc = could not find container \"584ec746be498b08e91bb4679a20fe9e6539b49fabf28b0a02eac2bd83640619\": container with ID starting with 584ec746be498b08e91bb4679a20fe9e6539b49fabf28b0a02eac2bd83640619 not found: ID does not exist" Feb 18 01:42:05 crc kubenswrapper[4847]: I0218 01:42:05.427969 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="814379b8-9d14-4d63-89c2-768fbe251782" path="/var/lib/kubelet/pods/814379b8-9d14-4d63-89c2-768fbe251782/volumes" Feb 18 01:42:06 crc kubenswrapper[4847]: I0218 01:42:06.404070 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:42:06 crc kubenswrapper[4847]: I0218 01:42:06.842982 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerStarted","Data":"ca68acc0e0e1f36d3124fc0c36ef9acdb34acca2526036bfff189dc4eb2f71c8"} Feb 18 01:42:07 crc kubenswrapper[4847]: E0218 01:42:07.416453 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:42:08 crc kubenswrapper[4847]: E0218 01:42:08.406361 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:42:10 crc kubenswrapper[4847]: I0218 01:42:10.289745 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tqw8d" Feb 18 01:42:10 crc kubenswrapper[4847]: I0218 01:42:10.364236 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tqw8d" Feb 18 01:42:10 crc kubenswrapper[4847]: I0218 01:42:10.537019 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tqw8d"] Feb 18 01:42:11 crc kubenswrapper[4847]: I0218 01:42:11.904287 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tqw8d" podUID="363d0cd5-dda4-4ef4-ab5c-971035c93645" containerName="registry-server" containerID="cri-o://c9ecdbe7446d485b95d589b8904cf616fef696ccb0233bba00961ee546ff242f" gracePeriod=2 Feb 18 01:42:12 crc kubenswrapper[4847]: I0218 01:42:12.916277 4847 generic.go:334] "Generic (PLEG): container finished" podID="363d0cd5-dda4-4ef4-ab5c-971035c93645" containerID="c9ecdbe7446d485b95d589b8904cf616fef696ccb0233bba00961ee546ff242f" exitCode=0 Feb 18 01:42:12 crc kubenswrapper[4847]: I0218 01:42:12.916355 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqw8d" event={"ID":"363d0cd5-dda4-4ef4-ab5c-971035c93645","Type":"ContainerDied","Data":"c9ecdbe7446d485b95d589b8904cf616fef696ccb0233bba00961ee546ff242f"} Feb 18 01:42:13 crc kubenswrapper[4847]: I0218 01:42:13.178806 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tqw8d" Feb 18 01:42:13 crc kubenswrapper[4847]: I0218 01:42:13.258259 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/363d0cd5-dda4-4ef4-ab5c-971035c93645-utilities\") pod \"363d0cd5-dda4-4ef4-ab5c-971035c93645\" (UID: \"363d0cd5-dda4-4ef4-ab5c-971035c93645\") " Feb 18 01:42:13 crc kubenswrapper[4847]: I0218 01:42:13.258558 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfs8p\" (UniqueName: \"kubernetes.io/projected/363d0cd5-dda4-4ef4-ab5c-971035c93645-kube-api-access-pfs8p\") pod \"363d0cd5-dda4-4ef4-ab5c-971035c93645\" (UID: \"363d0cd5-dda4-4ef4-ab5c-971035c93645\") " Feb 18 01:42:13 crc kubenswrapper[4847]: I0218 01:42:13.258713 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/363d0cd5-dda4-4ef4-ab5c-971035c93645-catalog-content\") pod \"363d0cd5-dda4-4ef4-ab5c-971035c93645\" (UID: \"363d0cd5-dda4-4ef4-ab5c-971035c93645\") " Feb 18 01:42:13 crc kubenswrapper[4847]: I0218 01:42:13.259408 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/363d0cd5-dda4-4ef4-ab5c-971035c93645-utilities" (OuterVolumeSpecName: "utilities") pod "363d0cd5-dda4-4ef4-ab5c-971035c93645" (UID: "363d0cd5-dda4-4ef4-ab5c-971035c93645"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:42:13 crc kubenswrapper[4847]: I0218 01:42:13.265468 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/363d0cd5-dda4-4ef4-ab5c-971035c93645-kube-api-access-pfs8p" (OuterVolumeSpecName: "kube-api-access-pfs8p") pod "363d0cd5-dda4-4ef4-ab5c-971035c93645" (UID: "363d0cd5-dda4-4ef4-ab5c-971035c93645"). InnerVolumeSpecName "kube-api-access-pfs8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:42:13 crc kubenswrapper[4847]: I0218 01:42:13.360388 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/363d0cd5-dda4-4ef4-ab5c-971035c93645-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:42:13 crc kubenswrapper[4847]: I0218 01:42:13.360419 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfs8p\" (UniqueName: \"kubernetes.io/projected/363d0cd5-dda4-4ef4-ab5c-971035c93645-kube-api-access-pfs8p\") on node \"crc\" DevicePath \"\"" Feb 18 01:42:13 crc kubenswrapper[4847]: I0218 01:42:13.394688 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/363d0cd5-dda4-4ef4-ab5c-971035c93645-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "363d0cd5-dda4-4ef4-ab5c-971035c93645" (UID: "363d0cd5-dda4-4ef4-ab5c-971035c93645"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:42:13 crc kubenswrapper[4847]: I0218 01:42:13.462825 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/363d0cd5-dda4-4ef4-ab5c-971035c93645-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:42:13 crc kubenswrapper[4847]: I0218 01:42:13.931285 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqw8d" event={"ID":"363d0cd5-dda4-4ef4-ab5c-971035c93645","Type":"ContainerDied","Data":"76a6cf639fa407b3528a5a4a13ec708bbd56e6a4966c9a44f9f952484b92b6f8"} Feb 18 01:42:13 crc kubenswrapper[4847]: I0218 01:42:13.931360 4847 scope.go:117] "RemoveContainer" containerID="c9ecdbe7446d485b95d589b8904cf616fef696ccb0233bba00961ee546ff242f" Feb 18 01:42:13 crc kubenswrapper[4847]: I0218 01:42:13.932584 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tqw8d" Feb 18 01:42:13 crc kubenswrapper[4847]: I0218 01:42:13.961860 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tqw8d"] Feb 18 01:42:13 crc kubenswrapper[4847]: I0218 01:42:13.975902 4847 scope.go:117] "RemoveContainer" containerID="1d167d599e7871efdf48dd236c8ebc9e0772a7533a45c2790bd4605396eb7410" Feb 18 01:42:13 crc kubenswrapper[4847]: I0218 01:42:13.976663 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tqw8d"] Feb 18 01:42:14 crc kubenswrapper[4847]: I0218 01:42:14.002745 4847 scope.go:117] "RemoveContainer" containerID="21325b827c963a78578ddce39788a86feb7917b9628e04ef47a2e2ca5100eca8" Feb 18 01:42:15 crc kubenswrapper[4847]: I0218 01:42:15.429010 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="363d0cd5-dda4-4ef4-ab5c-971035c93645" path="/var/lib/kubelet/pods/363d0cd5-dda4-4ef4-ab5c-971035c93645/volumes" Feb 18 01:42:20 crc kubenswrapper[4847]: E0218 01:42:20.410489 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:42:21 crc kubenswrapper[4847]: E0218 01:42:21.407105 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:42:32 crc kubenswrapper[4847]: E0218 01:42:32.407786 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:42:35 crc kubenswrapper[4847]: E0218 01:42:35.408977 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:42:45 crc kubenswrapper[4847]: E0218 01:42:45.408578 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:42:46 crc kubenswrapper[4847]: E0218 01:42:46.407414 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:42:57 crc kubenswrapper[4847]: E0218 01:42:57.421953 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:43:01 crc kubenswrapper[4847]: E0218 01:43:01.407771 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:43:10 crc kubenswrapper[4847]: E0218 01:43:10.424429 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:43:16 crc kubenswrapper[4847]: I0218 01:43:16.407718 4847 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 01:43:16 crc kubenswrapper[4847]: E0218 01:43:16.546685 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:43:16 crc kubenswrapper[4847]: E0218 01:43:16.546776 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:43:16 crc kubenswrapper[4847]: E0218 01:43:16.546958 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjwt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-k4t5r_openstack(452f74c1-fa5f-464b-9943-a4a1c2d5c48a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:43:16 crc kubenswrapper[4847]: E0218 01:43:16.548240 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:43:24 crc kubenswrapper[4847]: E0218 01:43:24.405841 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:43:31 crc kubenswrapper[4847]: E0218 01:43:31.408027 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:43:39 crc kubenswrapper[4847]: E0218 01:43:39.418272 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:43:43 crc kubenswrapper[4847]: E0218 01:43:43.407843 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:43:52 crc kubenswrapper[4847]: E0218 01:43:52.543845 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:43:52 crc kubenswrapper[4847]: E0218 01:43:52.544679 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:43:52 crc kubenswrapper[4847]: E0218 01:43:52.544854 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9fhbdh67dh64h568hch65dhb8h67chf7h66dhfh585h565h85h554h68ch8bh56ch677h57bhb5h544h575hd7h5f8hc7h66dh669h57bh589h56cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:43:52 crc kubenswrapper[4847]: E0218 01:43:52.546129 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:43:55 crc kubenswrapper[4847]: E0218 01:43:55.409803 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:44:03 crc kubenswrapper[4847]: E0218 01:44:03.408678 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:44:08 crc kubenswrapper[4847]: E0218 01:44:08.406749 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:44:16 crc kubenswrapper[4847]: E0218 01:44:16.406994 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:44:23 crc kubenswrapper[4847]: E0218 01:44:23.406059 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:44:23 crc kubenswrapper[4847]: I0218 01:44:23.492012 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:44:23 crc kubenswrapper[4847]: I0218 01:44:23.492071 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:44:30 crc kubenswrapper[4847]: E0218 01:44:30.408321 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:44:36 crc kubenswrapper[4847]: E0218 01:44:36.406718 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:44:44 crc kubenswrapper[4847]: E0218 01:44:44.406330 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:44:44 crc kubenswrapper[4847]: I0218 01:44:44.776663 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="109c4d3d-c276-45ed-93d2-d1414e156fb9" containerName="galera" probeResult="failure" output="command timed out" Feb 18 01:44:44 crc kubenswrapper[4847]: I0218 01:44:44.778329 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="109c4d3d-c276-45ed-93d2-d1414e156fb9" containerName="galera" probeResult="failure" output="command timed out" Feb 18 01:44:48 crc kubenswrapper[4847]: E0218 01:44:48.408291 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:44:53 crc kubenswrapper[4847]: I0218 01:44:53.491902 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:44:53 crc kubenswrapper[4847]: I0218 01:44:53.492572 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:44:55 crc kubenswrapper[4847]: E0218 01:44:55.408665 4847 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:45:00 crc kubenswrapper[4847]: I0218 01:45:00.186345 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522985-kzdmd"] Feb 18 01:45:00 crc kubenswrapper[4847]: E0218 01:45:00.187991 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814379b8-9d14-4d63-89c2-768fbe251782" containerName="extract-utilities" Feb 18 01:45:00 crc kubenswrapper[4847]: I0218 01:45:00.188025 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="814379b8-9d14-4d63-89c2-768fbe251782" containerName="extract-utilities" Feb 18 01:45:00 crc kubenswrapper[4847]: E0218 01:45:00.188060 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814379b8-9d14-4d63-89c2-768fbe251782" containerName="registry-server" Feb 18 01:45:00 crc kubenswrapper[4847]: I0218 01:45:00.188079 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="814379b8-9d14-4d63-89c2-768fbe251782" containerName="registry-server" Feb 18 01:45:00 crc kubenswrapper[4847]: E0218 01:45:00.188107 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="363d0cd5-dda4-4ef4-ab5c-971035c93645" containerName="extract-content" Feb 18 01:45:00 crc kubenswrapper[4847]: I0218 01:45:00.188122 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="363d0cd5-dda4-4ef4-ab5c-971035c93645" containerName="extract-content" Feb 18 01:45:00 crc kubenswrapper[4847]: E0218 01:45:00.188158 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="363d0cd5-dda4-4ef4-ab5c-971035c93645" containerName="registry-server" Feb 18 01:45:00 crc kubenswrapper[4847]: I0218 01:45:00.188172 4847 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="363d0cd5-dda4-4ef4-ab5c-971035c93645" containerName="registry-server" Feb 18 01:45:00 crc kubenswrapper[4847]: E0218 01:45:00.188193 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="363d0cd5-dda4-4ef4-ab5c-971035c93645" containerName="extract-utilities" Feb 18 01:45:00 crc kubenswrapper[4847]: I0218 01:45:00.188206 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="363d0cd5-dda4-4ef4-ab5c-971035c93645" containerName="extract-utilities" Feb 18 01:45:00 crc kubenswrapper[4847]: E0218 01:45:00.188241 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814379b8-9d14-4d63-89c2-768fbe251782" containerName="extract-content" Feb 18 01:45:00 crc kubenswrapper[4847]: I0218 01:45:00.188255 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="814379b8-9d14-4d63-89c2-768fbe251782" containerName="extract-content" Feb 18 01:45:00 crc kubenswrapper[4847]: I0218 01:45:00.188793 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="363d0cd5-dda4-4ef4-ab5c-971035c93645" containerName="registry-server" Feb 18 01:45:00 crc kubenswrapper[4847]: I0218 01:45:00.188874 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="814379b8-9d14-4d63-89c2-768fbe251782" containerName="registry-server" Feb 18 01:45:00 crc kubenswrapper[4847]: I0218 01:45:00.190376 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-kzdmd" Feb 18 01:45:00 crc kubenswrapper[4847]: I0218 01:45:00.193869 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 01:45:00 crc kubenswrapper[4847]: I0218 01:45:00.194173 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 01:45:00 crc kubenswrapper[4847]: I0218 01:45:00.221469 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522985-kzdmd"] Feb 18 01:45:00 crc kubenswrapper[4847]: I0218 01:45:00.334478 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f56w\" (UniqueName: \"kubernetes.io/projected/e2820ae5-70dd-4191-bb0f-549feec7f559-kube-api-access-8f56w\") pod \"collect-profiles-29522985-kzdmd\" (UID: \"e2820ae5-70dd-4191-bb0f-549feec7f559\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-kzdmd" Feb 18 01:45:00 crc kubenswrapper[4847]: I0218 01:45:00.334803 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2820ae5-70dd-4191-bb0f-549feec7f559-secret-volume\") pod \"collect-profiles-29522985-kzdmd\" (UID: \"e2820ae5-70dd-4191-bb0f-549feec7f559\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-kzdmd" Feb 18 01:45:00 crc kubenswrapper[4847]: I0218 01:45:00.335062 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2820ae5-70dd-4191-bb0f-549feec7f559-config-volume\") pod \"collect-profiles-29522985-kzdmd\" (UID: \"e2820ae5-70dd-4191-bb0f-549feec7f559\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-kzdmd" Feb 18 01:45:00 crc kubenswrapper[4847]: I0218 01:45:00.438593 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2820ae5-70dd-4191-bb0f-549feec7f559-secret-volume\") pod \"collect-profiles-29522985-kzdmd\" (UID: \"e2820ae5-70dd-4191-bb0f-549feec7f559\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-kzdmd" Feb 18 01:45:00 crc kubenswrapper[4847]: I0218 01:45:00.438870 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2820ae5-70dd-4191-bb0f-549feec7f559-config-volume\") pod \"collect-profiles-29522985-kzdmd\" (UID: \"e2820ae5-70dd-4191-bb0f-549feec7f559\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-kzdmd" Feb 18 01:45:00 crc kubenswrapper[4847]: I0218 01:45:00.439170 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f56w\" (UniqueName: \"kubernetes.io/projected/e2820ae5-70dd-4191-bb0f-549feec7f559-kube-api-access-8f56w\") pod \"collect-profiles-29522985-kzdmd\" (UID: \"e2820ae5-70dd-4191-bb0f-549feec7f559\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-kzdmd" Feb 18 01:45:00 crc kubenswrapper[4847]: I0218 01:45:00.440174 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2820ae5-70dd-4191-bb0f-549feec7f559-config-volume\") pod \"collect-profiles-29522985-kzdmd\" (UID: \"e2820ae5-70dd-4191-bb0f-549feec7f559\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-kzdmd" Feb 18 01:45:00 crc kubenswrapper[4847]: I0218 01:45:00.449688 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e2820ae5-70dd-4191-bb0f-549feec7f559-secret-volume\") pod \"collect-profiles-29522985-kzdmd\" (UID: \"e2820ae5-70dd-4191-bb0f-549feec7f559\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-kzdmd" Feb 18 01:45:00 crc kubenswrapper[4847]: I0218 01:45:00.468949 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f56w\" (UniqueName: \"kubernetes.io/projected/e2820ae5-70dd-4191-bb0f-549feec7f559-kube-api-access-8f56w\") pod \"collect-profiles-29522985-kzdmd\" (UID: \"e2820ae5-70dd-4191-bb0f-549feec7f559\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-kzdmd" Feb 18 01:45:00 crc kubenswrapper[4847]: I0218 01:45:00.527583 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-kzdmd" Feb 18 01:45:01 crc kubenswrapper[4847]: I0218 01:45:01.080312 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522985-kzdmd"] Feb 18 01:45:01 crc kubenswrapper[4847]: I0218 01:45:01.203736 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-kzdmd" event={"ID":"e2820ae5-70dd-4191-bb0f-549feec7f559","Type":"ContainerStarted","Data":"43df1ff4689444811e29d38a67a3735fbd9736c117f7a418d83f3f4004221b50"} Feb 18 01:45:02 crc kubenswrapper[4847]: I0218 01:45:02.220527 4847 generic.go:334] "Generic (PLEG): container finished" podID="e2820ae5-70dd-4191-bb0f-549feec7f559" containerID="1491fb95bc07b204036e5debc2e2a4652ce5d1f770435f6ff525a40cfb9a53d5" exitCode=0 Feb 18 01:45:02 crc kubenswrapper[4847]: I0218 01:45:02.220591 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-kzdmd" 
event={"ID":"e2820ae5-70dd-4191-bb0f-549feec7f559","Type":"ContainerDied","Data":"1491fb95bc07b204036e5debc2e2a4652ce5d1f770435f6ff525a40cfb9a53d5"} Feb 18 01:45:03 crc kubenswrapper[4847]: E0218 01:45:03.406561 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:45:03 crc kubenswrapper[4847]: I0218 01:45:03.688797 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-kzdmd" Feb 18 01:45:03 crc kubenswrapper[4847]: I0218 01:45:03.828927 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2820ae5-70dd-4191-bb0f-549feec7f559-secret-volume\") pod \"e2820ae5-70dd-4191-bb0f-549feec7f559\" (UID: \"e2820ae5-70dd-4191-bb0f-549feec7f559\") " Feb 18 01:45:03 crc kubenswrapper[4847]: I0218 01:45:03.829000 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2820ae5-70dd-4191-bb0f-549feec7f559-config-volume\") pod \"e2820ae5-70dd-4191-bb0f-549feec7f559\" (UID: \"e2820ae5-70dd-4191-bb0f-549feec7f559\") " Feb 18 01:45:03 crc kubenswrapper[4847]: I0218 01:45:03.829415 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f56w\" (UniqueName: \"kubernetes.io/projected/e2820ae5-70dd-4191-bb0f-549feec7f559-kube-api-access-8f56w\") pod \"e2820ae5-70dd-4191-bb0f-549feec7f559\" (UID: \"e2820ae5-70dd-4191-bb0f-549feec7f559\") " Feb 18 01:45:03 crc kubenswrapper[4847]: I0218 01:45:03.829627 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/e2820ae5-70dd-4191-bb0f-549feec7f559-config-volume" (OuterVolumeSpecName: "config-volume") pod "e2820ae5-70dd-4191-bb0f-549feec7f559" (UID: "e2820ae5-70dd-4191-bb0f-549feec7f559"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 01:45:03 crc kubenswrapper[4847]: I0218 01:45:03.830308 4847 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2820ae5-70dd-4191-bb0f-549feec7f559-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 01:45:03 crc kubenswrapper[4847]: I0218 01:45:03.839026 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2820ae5-70dd-4191-bb0f-549feec7f559-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e2820ae5-70dd-4191-bb0f-549feec7f559" (UID: "e2820ae5-70dd-4191-bb0f-549feec7f559"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 01:45:03 crc kubenswrapper[4847]: I0218 01:45:03.839036 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2820ae5-70dd-4191-bb0f-549feec7f559-kube-api-access-8f56w" (OuterVolumeSpecName: "kube-api-access-8f56w") pod "e2820ae5-70dd-4191-bb0f-549feec7f559" (UID: "e2820ae5-70dd-4191-bb0f-549feec7f559"). InnerVolumeSpecName "kube-api-access-8f56w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:45:03 crc kubenswrapper[4847]: I0218 01:45:03.932999 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f56w\" (UniqueName: \"kubernetes.io/projected/e2820ae5-70dd-4191-bb0f-549feec7f559-kube-api-access-8f56w\") on node \"crc\" DevicePath \"\"" Feb 18 01:45:03 crc kubenswrapper[4847]: I0218 01:45:03.933035 4847 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2820ae5-70dd-4191-bb0f-549feec7f559-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 01:45:04 crc kubenswrapper[4847]: I0218 01:45:04.252305 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-kzdmd" Feb 18 01:45:04 crc kubenswrapper[4847]: I0218 01:45:04.252285 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522985-kzdmd" event={"ID":"e2820ae5-70dd-4191-bb0f-549feec7f559","Type":"ContainerDied","Data":"43df1ff4689444811e29d38a67a3735fbd9736c117f7a418d83f3f4004221b50"} Feb 18 01:45:04 crc kubenswrapper[4847]: I0218 01:45:04.252391 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43df1ff4689444811e29d38a67a3735fbd9736c117f7a418d83f3f4004221b50" Feb 18 01:45:04 crc kubenswrapper[4847]: I0218 01:45:04.785104 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522940-k8hqs"] Feb 18 01:45:04 crc kubenswrapper[4847]: I0218 01:45:04.798253 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522940-k8hqs"] Feb 18 01:45:05 crc kubenswrapper[4847]: I0218 01:45:05.426970 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c51f4019-3d36-45e9-a342-72e8b4ef9745" 
path="/var/lib/kubelet/pods/c51f4019-3d36-45e9-a342-72e8b4ef9745/volumes" Feb 18 01:45:10 crc kubenswrapper[4847]: E0218 01:45:10.408401 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:45:18 crc kubenswrapper[4847]: E0218 01:45:18.408263 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:45:22 crc kubenswrapper[4847]: E0218 01:45:22.408538 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:45:23 crc kubenswrapper[4847]: I0218 01:45:23.491448 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:45:23 crc kubenswrapper[4847]: I0218 01:45:23.491875 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 18 01:45:23 crc kubenswrapper[4847]: I0218 01:45:23.491944 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 01:45:23 crc kubenswrapper[4847]: I0218 01:45:23.493086 4847 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ca68acc0e0e1f36d3124fc0c36ef9acdb34acca2526036bfff189dc4eb2f71c8"} pod="openshift-machine-config-operator/machine-config-daemon-xsj47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 01:45:23 crc kubenswrapper[4847]: I0218 01:45:23.493189 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" containerID="cri-o://ca68acc0e0e1f36d3124fc0c36ef9acdb34acca2526036bfff189dc4eb2f71c8" gracePeriod=600 Feb 18 01:45:24 crc kubenswrapper[4847]: I0218 01:45:24.510379 4847 generic.go:334] "Generic (PLEG): container finished" podID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerID="ca68acc0e0e1f36d3124fc0c36ef9acdb34acca2526036bfff189dc4eb2f71c8" exitCode=0 Feb 18 01:45:24 crc kubenswrapper[4847]: I0218 01:45:24.510587 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerDied","Data":"ca68acc0e0e1f36d3124fc0c36ef9acdb34acca2526036bfff189dc4eb2f71c8"} Feb 18 01:45:24 crc kubenswrapper[4847]: I0218 01:45:24.511253 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" 
event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerStarted","Data":"90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09"} Feb 18 01:45:24 crc kubenswrapper[4847]: I0218 01:45:24.511289 4847 scope.go:117] "RemoveContainer" containerID="ee84539605b8cfafbf0327f5417b1c41aec29aa84230d041dd9e2d8bfef30271" Feb 18 01:45:33 crc kubenswrapper[4847]: E0218 01:45:33.413744 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:45:36 crc kubenswrapper[4847]: I0218 01:45:36.145330 4847 scope.go:117] "RemoveContainer" containerID="014b5ea421bfe6087923e1eb2f1b5498b3f427fab6627869a622130da144680f" Feb 18 01:45:36 crc kubenswrapper[4847]: E0218 01:45:36.406024 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:45:46 crc kubenswrapper[4847]: E0218 01:45:46.407560 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:45:47 crc kubenswrapper[4847]: E0218 01:45:47.421025 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:46:01 crc kubenswrapper[4847]: E0218 01:46:01.411076 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:46:02 crc kubenswrapper[4847]: E0218 01:46:02.406578 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:46:13 crc kubenswrapper[4847]: E0218 01:46:13.406826 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:46:14 crc kubenswrapper[4847]: E0218 01:46:14.406067 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:46:28 crc kubenswrapper[4847]: E0218 01:46:28.406375 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:46:29 crc kubenswrapper[4847]: E0218 01:46:29.408461 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:46:41 crc kubenswrapper[4847]: E0218 01:46:41.411942 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:46:43 crc kubenswrapper[4847]: E0218 01:46:43.406506 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:46:53 crc kubenswrapper[4847]: E0218 01:46:53.407143 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:46:55 crc kubenswrapper[4847]: E0218 01:46:55.406579 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:47:04 crc kubenswrapper[4847]: E0218 01:47:04.407985 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:47:07 crc kubenswrapper[4847]: E0218 01:47:07.415662 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:47:18 crc kubenswrapper[4847]: E0218 01:47:18.409971 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:47:19 crc kubenswrapper[4847]: E0218 01:47:19.407594 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:47:23 crc kubenswrapper[4847]: I0218 01:47:23.492732 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:47:23 crc kubenswrapper[4847]: I0218 01:47:23.493663 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:47:29 crc kubenswrapper[4847]: E0218 01:47:29.408509 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:47:30 crc kubenswrapper[4847]: E0218 01:47:30.408345 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:47:41 crc kubenswrapper[4847]: E0218 01:47:41.408287 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:47:44 crc kubenswrapper[4847]: E0218 01:47:44.407899 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:47:53 crc kubenswrapper[4847]: I0218 01:47:53.492201 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:47:53 crc kubenswrapper[4847]: I0218 01:47:53.492885 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:47:55 crc kubenswrapper[4847]: E0218 01:47:55.408255 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:47:56 crc kubenswrapper[4847]: E0218 01:47:56.408041 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:48:07 crc kubenswrapper[4847]: E0218 01:48:07.413742 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:48:11 crc kubenswrapper[4847]: E0218 01:48:11.408051 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:48:20 crc kubenswrapper[4847]: E0218 01:48:20.408445 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:48:23 crc kubenswrapper[4847]: I0218 01:48:23.491548 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:48:23 crc kubenswrapper[4847]: I0218 01:48:23.492162 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:48:23 crc kubenswrapper[4847]: I0218 01:48:23.492208 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 01:48:23 crc kubenswrapper[4847]: I0218 01:48:23.493171 
4847 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09"} pod="openshift-machine-config-operator/machine-config-daemon-xsj47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 01:48:23 crc kubenswrapper[4847]: I0218 01:48:23.493220 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" containerID="cri-o://90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" gracePeriod=600 Feb 18 01:48:23 crc kubenswrapper[4847]: E0218 01:48:23.615869 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:48:23 crc kubenswrapper[4847]: I0218 01:48:23.812893 4847 generic.go:334] "Generic (PLEG): container finished" podID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" exitCode=0 Feb 18 01:48:23 crc kubenswrapper[4847]: I0218 01:48:23.813133 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerDied","Data":"90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09"} Feb 18 01:48:23 crc kubenswrapper[4847]: I0218 01:48:23.813416 4847 scope.go:117] "RemoveContainer" 
containerID="ca68acc0e0e1f36d3124fc0c36ef9acdb34acca2526036bfff189dc4eb2f71c8" Feb 18 01:48:23 crc kubenswrapper[4847]: I0218 01:48:23.814397 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:48:23 crc kubenswrapper[4847]: E0218 01:48:23.814915 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:48:24 crc kubenswrapper[4847]: I0218 01:48:24.407742 4847 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 01:48:24 crc kubenswrapper[4847]: E0218 01:48:24.538363 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:48:24 crc kubenswrapper[4847]: E0218 01:48:24.538800 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:48:24 crc kubenswrapper[4847]: E0218 01:48:24.538986 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjwt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-k4t5r_openstack(452f74c1-fa5f-464b-9943-a4a1c2d5c48a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:48:24 crc kubenswrapper[4847]: E0218 01:48:24.540250 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:48:34 crc kubenswrapper[4847]: E0218 01:48:34.406893 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:48:36 crc kubenswrapper[4847]: E0218 01:48:36.407096 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:48:38 crc kubenswrapper[4847]: I0218 01:48:38.405410 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:48:38 crc kubenswrapper[4847]: E0218 01:48:38.406561 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:48:47 crc kubenswrapper[4847]: E0218 01:48:47.423148 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:48:47 crc kubenswrapper[4847]: E0218 01:48:47.423155 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:48:51 crc kubenswrapper[4847]: I0218 01:48:51.404760 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:48:51 crc kubenswrapper[4847]: E0218 01:48:51.405413 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:48:59 crc kubenswrapper[4847]: E0218 01:48:59.406889 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:49:02 crc kubenswrapper[4847]: I0218 01:49:02.405085 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:49:02 crc kubenswrapper[4847]: E0218 01:49:02.405706 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:49:02 crc kubenswrapper[4847]: E0218 01:49:02.566978 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:49:02 crc kubenswrapper[4847]: E0218 01:49:02.567051 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:49:02 crc kubenswrapper[4847]: E0218 01:49:02.567197 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9fhbdh67dh64h568hch65dhb8h67chf7h66dhfh585h565h85h554h68ch8bh56ch677h57bhb5h544h575hd7h5f8hc7h66dh669h57bh589h56cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:49:02 crc kubenswrapper[4847]: E0218 01:49:02.568471 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:49:11 crc kubenswrapper[4847]: E0218 01:49:11.406956 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:49:14 crc kubenswrapper[4847]: I0218 01:49:14.404001 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:49:14 crc kubenswrapper[4847]: E0218 01:49:14.404987 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:49:15 crc kubenswrapper[4847]: E0218 01:49:15.406967 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:49:25 crc kubenswrapper[4847]: I0218 01:49:25.404758 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:49:25 crc kubenswrapper[4847]: E0218 01:49:25.405536 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:49:25 crc kubenswrapper[4847]: E0218 01:49:25.408470 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:49:27 crc kubenswrapper[4847]: E0218 01:49:27.427029 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:49:38 crc kubenswrapper[4847]: E0218 01:49:38.407498 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:49:40 crc kubenswrapper[4847]: I0218 01:49:40.404949 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:49:40 crc kubenswrapper[4847]: E0218 01:49:40.406103 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:49:42 crc kubenswrapper[4847]: E0218 01:49:42.408810 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:49:51 crc kubenswrapper[4847]: E0218 01:49:51.405727 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:49:53 crc kubenswrapper[4847]: I0218 01:49:53.404971 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:49:53 crc kubenswrapper[4847]: E0218 01:49:53.405513 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:49:53 crc kubenswrapper[4847]: E0218 01:49:53.410053 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:50:04 crc kubenswrapper[4847]: I0218 01:50:04.404969 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:50:04 crc kubenswrapper[4847]: E0218 01:50:04.407932 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:50:05 crc kubenswrapper[4847]: E0218 01:50:05.407996 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:50:08 crc kubenswrapper[4847]: E0218 01:50:08.407026 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:50:18 crc kubenswrapper[4847]: I0218 01:50:18.405660 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:50:18 crc kubenswrapper[4847]: E0218 01:50:18.407091 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:50:19 crc kubenswrapper[4847]: E0218 01:50:19.423087 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:50:23 crc kubenswrapper[4847]: E0218 01:50:23.407511 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:50:32 crc kubenswrapper[4847]: I0218 01:50:32.405077 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:50:32 crc kubenswrapper[4847]: E0218 01:50:32.407333 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:50:33 crc kubenswrapper[4847]: E0218 01:50:33.411354 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:50:34 crc kubenswrapper[4847]: E0218 01:50:34.410545 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:50:44 crc kubenswrapper[4847]: I0218 01:50:44.404809 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:50:44 crc kubenswrapper[4847]: E0218 01:50:44.405753 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:50:46 crc kubenswrapper[4847]: E0218 01:50:46.406865 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:50:47 crc kubenswrapper[4847]: E0218 01:50:47.421047 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:50:56 crc kubenswrapper[4847]: I0218 01:50:56.405363 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:50:56 crc kubenswrapper[4847]: E0218 01:50:56.406520 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:50:57 crc kubenswrapper[4847]: E0218 01:50:57.416290 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:51:01 crc kubenswrapper[4847]: E0218 01:51:01.408278 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:51:08 crc kubenswrapper[4847]: E0218 01:51:08.407711 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:51:09 crc kubenswrapper[4847]: I0218 01:51:09.404218 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:51:09 crc kubenswrapper[4847]: E0218 01:51:09.404708 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:51:15 crc kubenswrapper[4847]: E0218 01:51:15.406303 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:51:20 crc kubenswrapper[4847]: I0218 01:51:20.405479 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:51:20 crc kubenswrapper[4847]: E0218 01:51:20.406835 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:51:20 crc kubenswrapper[4847]: E0218 01:51:20.410659 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:51:30 crc kubenswrapper[4847]: E0218 01:51:30.408373 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:51:31 crc kubenswrapper[4847]: I0218 01:51:31.404984 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:51:31 crc kubenswrapper[4847]: E0218 01:51:31.405817 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:51:32 crc kubenswrapper[4847]: E0218 01:51:32.407323 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:51:42 crc kubenswrapper[4847]: I0218 01:51:42.405189 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:51:42 crc kubenswrapper[4847]: E0218 01:51:42.406456 4847 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:51:44 crc kubenswrapper[4847]: E0218 01:51:44.425957 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:51:45 crc kubenswrapper[4847]: E0218 01:51:45.407749 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:51:56 crc kubenswrapper[4847]: I0218 01:51:56.404693 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:51:56 crc kubenswrapper[4847]: E0218 01:51:56.405295 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:51:56 crc kubenswrapper[4847]: E0218 01:51:56.407114 4847 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:51:56 crc kubenswrapper[4847]: I0218 01:51:56.675760 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nslhb"] Feb 18 01:51:56 crc kubenswrapper[4847]: E0218 01:51:56.676254 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2820ae5-70dd-4191-bb0f-549feec7f559" containerName="collect-profiles" Feb 18 01:51:56 crc kubenswrapper[4847]: I0218 01:51:56.676270 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2820ae5-70dd-4191-bb0f-549feec7f559" containerName="collect-profiles" Feb 18 01:51:56 crc kubenswrapper[4847]: I0218 01:51:56.676465 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2820ae5-70dd-4191-bb0f-549feec7f559" containerName="collect-profiles" Feb 18 01:51:56 crc kubenswrapper[4847]: I0218 01:51:56.677982 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nslhb" Feb 18 01:51:56 crc kubenswrapper[4847]: I0218 01:51:56.735562 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nslhb"] Feb 18 01:51:56 crc kubenswrapper[4847]: I0218 01:51:56.738961 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5tzd\" (UniqueName: \"kubernetes.io/projected/edb18300-ba56-4f28-8b59-4ab2908f17ac-kube-api-access-n5tzd\") pod \"redhat-marketplace-nslhb\" (UID: \"edb18300-ba56-4f28-8b59-4ab2908f17ac\") " pod="openshift-marketplace/redhat-marketplace-nslhb" Feb 18 01:51:56 crc kubenswrapper[4847]: I0218 01:51:56.739099 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edb18300-ba56-4f28-8b59-4ab2908f17ac-catalog-content\") pod \"redhat-marketplace-nslhb\" (UID: \"edb18300-ba56-4f28-8b59-4ab2908f17ac\") " pod="openshift-marketplace/redhat-marketplace-nslhb" Feb 18 01:51:56 crc kubenswrapper[4847]: I0218 01:51:56.739149 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edb18300-ba56-4f28-8b59-4ab2908f17ac-utilities\") pod \"redhat-marketplace-nslhb\" (UID: \"edb18300-ba56-4f28-8b59-4ab2908f17ac\") " pod="openshift-marketplace/redhat-marketplace-nslhb" Feb 18 01:51:56 crc kubenswrapper[4847]: I0218 01:51:56.840637 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5tzd\" (UniqueName: \"kubernetes.io/projected/edb18300-ba56-4f28-8b59-4ab2908f17ac-kube-api-access-n5tzd\") pod \"redhat-marketplace-nslhb\" (UID: \"edb18300-ba56-4f28-8b59-4ab2908f17ac\") " pod="openshift-marketplace/redhat-marketplace-nslhb" Feb 18 01:51:56 crc kubenswrapper[4847]: I0218 01:51:56.840822 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edb18300-ba56-4f28-8b59-4ab2908f17ac-catalog-content\") pod \"redhat-marketplace-nslhb\" (UID: \"edb18300-ba56-4f28-8b59-4ab2908f17ac\") " pod="openshift-marketplace/redhat-marketplace-nslhb" Feb 18 01:51:56 crc kubenswrapper[4847]: I0218 01:51:56.840892 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edb18300-ba56-4f28-8b59-4ab2908f17ac-utilities\") pod \"redhat-marketplace-nslhb\" (UID: \"edb18300-ba56-4f28-8b59-4ab2908f17ac\") " pod="openshift-marketplace/redhat-marketplace-nslhb" Feb 18 01:51:56 crc kubenswrapper[4847]: I0218 01:51:56.841367 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edb18300-ba56-4f28-8b59-4ab2908f17ac-catalog-content\") pod \"redhat-marketplace-nslhb\" (UID: \"edb18300-ba56-4f28-8b59-4ab2908f17ac\") " pod="openshift-marketplace/redhat-marketplace-nslhb" Feb 18 01:51:56 crc kubenswrapper[4847]: I0218 01:51:56.841472 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edb18300-ba56-4f28-8b59-4ab2908f17ac-utilities\") pod \"redhat-marketplace-nslhb\" (UID: \"edb18300-ba56-4f28-8b59-4ab2908f17ac\") " pod="openshift-marketplace/redhat-marketplace-nslhb" Feb 18 01:51:56 crc kubenswrapper[4847]: I0218 01:51:56.867749 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5tzd\" (UniqueName: \"kubernetes.io/projected/edb18300-ba56-4f28-8b59-4ab2908f17ac-kube-api-access-n5tzd\") pod \"redhat-marketplace-nslhb\" (UID: \"edb18300-ba56-4f28-8b59-4ab2908f17ac\") " pod="openshift-marketplace/redhat-marketplace-nslhb" Feb 18 01:51:57 crc kubenswrapper[4847]: I0218 01:51:57.002052 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nslhb" Feb 18 01:51:57 crc kubenswrapper[4847]: W0218 01:51:57.491204 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedb18300_ba56_4f28_8b59_4ab2908f17ac.slice/crio-565783fb3c92a0881383d1f04fb7a662a6d43ff81421b5c4616cb471471b5eec WatchSource:0}: Error finding container 565783fb3c92a0881383d1f04fb7a662a6d43ff81421b5c4616cb471471b5eec: Status 404 returned error can't find the container with id 565783fb3c92a0881383d1f04fb7a662a6d43ff81421b5c4616cb471471b5eec Feb 18 01:51:57 crc kubenswrapper[4847]: I0218 01:51:57.493689 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nslhb"] Feb 18 01:51:58 crc kubenswrapper[4847]: E0218 01:51:58.406442 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:51:58 crc kubenswrapper[4847]: I0218 01:51:58.501780 4847 generic.go:334] "Generic (PLEG): container finished" podID="edb18300-ba56-4f28-8b59-4ab2908f17ac" containerID="279772705a53a00ec71387276b2b81be364189c711faecada72e5fbd9074392c" exitCode=0 Feb 18 01:51:58 crc kubenswrapper[4847]: I0218 01:51:58.501874 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nslhb" event={"ID":"edb18300-ba56-4f28-8b59-4ab2908f17ac","Type":"ContainerDied","Data":"279772705a53a00ec71387276b2b81be364189c711faecada72e5fbd9074392c"} Feb 18 01:51:58 crc kubenswrapper[4847]: I0218 01:51:58.501971 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nslhb" 
event={"ID":"edb18300-ba56-4f28-8b59-4ab2908f17ac","Type":"ContainerStarted","Data":"565783fb3c92a0881383d1f04fb7a662a6d43ff81421b5c4616cb471471b5eec"} Feb 18 01:51:59 crc kubenswrapper[4847]: I0218 01:51:59.517291 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nslhb" event={"ID":"edb18300-ba56-4f28-8b59-4ab2908f17ac","Type":"ContainerStarted","Data":"3ec7876f18f59c941ca4f95cbfdaf5187af5d45e09d77b8ff57930e578fb26f3"} Feb 18 01:52:00 crc kubenswrapper[4847]: I0218 01:52:00.533678 4847 generic.go:334] "Generic (PLEG): container finished" podID="edb18300-ba56-4f28-8b59-4ab2908f17ac" containerID="3ec7876f18f59c941ca4f95cbfdaf5187af5d45e09d77b8ff57930e578fb26f3" exitCode=0 Feb 18 01:52:00 crc kubenswrapper[4847]: I0218 01:52:00.533724 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nslhb" event={"ID":"edb18300-ba56-4f28-8b59-4ab2908f17ac","Type":"ContainerDied","Data":"3ec7876f18f59c941ca4f95cbfdaf5187af5d45e09d77b8ff57930e578fb26f3"} Feb 18 01:52:02 crc kubenswrapper[4847]: I0218 01:52:02.558129 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nslhb" event={"ID":"edb18300-ba56-4f28-8b59-4ab2908f17ac","Type":"ContainerStarted","Data":"c51bac9419128faa90afceb1ab37d0204cc8b2473a14ec401e27fcce85bea273"} Feb 18 01:52:02 crc kubenswrapper[4847]: I0218 01:52:02.582535 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nslhb" podStartSLOduration=3.394698748 podStartE2EDuration="6.582513302s" podCreationTimestamp="2026-02-18 01:51:56 +0000 UTC" firstStartedPulling="2026-02-18 01:51:58.505560958 +0000 UTC m=+5191.882911940" lastFinishedPulling="2026-02-18 01:52:01.693375522 +0000 UTC m=+5195.070726494" observedRunningTime="2026-02-18 01:52:02.582147723 +0000 UTC m=+5195.959498705" watchObservedRunningTime="2026-02-18 01:52:02.582513302 +0000 UTC 
m=+5195.959864254" Feb 18 01:52:02 crc kubenswrapper[4847]: I0218 01:52:02.650429 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-knrjh"] Feb 18 01:52:02 crc kubenswrapper[4847]: I0218 01:52:02.657001 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-knrjh" Feb 18 01:52:02 crc kubenswrapper[4847]: I0218 01:52:02.684049 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xvwm\" (UniqueName: \"kubernetes.io/projected/a0e6d068-2fcf-46d8-aa9e-b85db52ae378-kube-api-access-2xvwm\") pod \"redhat-operators-knrjh\" (UID: \"a0e6d068-2fcf-46d8-aa9e-b85db52ae378\") " pod="openshift-marketplace/redhat-operators-knrjh" Feb 18 01:52:02 crc kubenswrapper[4847]: I0218 01:52:02.684264 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0e6d068-2fcf-46d8-aa9e-b85db52ae378-catalog-content\") pod \"redhat-operators-knrjh\" (UID: \"a0e6d068-2fcf-46d8-aa9e-b85db52ae378\") " pod="openshift-marketplace/redhat-operators-knrjh" Feb 18 01:52:02 crc kubenswrapper[4847]: I0218 01:52:02.684298 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e6d068-2fcf-46d8-aa9e-b85db52ae378-utilities\") pod \"redhat-operators-knrjh\" (UID: \"a0e6d068-2fcf-46d8-aa9e-b85db52ae378\") " pod="openshift-marketplace/redhat-operators-knrjh" Feb 18 01:52:02 crc kubenswrapper[4847]: I0218 01:52:02.687233 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-knrjh"] Feb 18 01:52:02 crc kubenswrapper[4847]: I0218 01:52:02.792021 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a0e6d068-2fcf-46d8-aa9e-b85db52ae378-catalog-content\") pod \"redhat-operators-knrjh\" (UID: \"a0e6d068-2fcf-46d8-aa9e-b85db52ae378\") " pod="openshift-marketplace/redhat-operators-knrjh" Feb 18 01:52:02 crc kubenswrapper[4847]: I0218 01:52:02.792085 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e6d068-2fcf-46d8-aa9e-b85db52ae378-utilities\") pod \"redhat-operators-knrjh\" (UID: \"a0e6d068-2fcf-46d8-aa9e-b85db52ae378\") " pod="openshift-marketplace/redhat-operators-knrjh" Feb 18 01:52:02 crc kubenswrapper[4847]: I0218 01:52:02.792216 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xvwm\" (UniqueName: \"kubernetes.io/projected/a0e6d068-2fcf-46d8-aa9e-b85db52ae378-kube-api-access-2xvwm\") pod \"redhat-operators-knrjh\" (UID: \"a0e6d068-2fcf-46d8-aa9e-b85db52ae378\") " pod="openshift-marketplace/redhat-operators-knrjh" Feb 18 01:52:02 crc kubenswrapper[4847]: I0218 01:52:02.792755 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0e6d068-2fcf-46d8-aa9e-b85db52ae378-catalog-content\") pod \"redhat-operators-knrjh\" (UID: \"a0e6d068-2fcf-46d8-aa9e-b85db52ae378\") " pod="openshift-marketplace/redhat-operators-knrjh" Feb 18 01:52:02 crc kubenswrapper[4847]: I0218 01:52:02.792862 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e6d068-2fcf-46d8-aa9e-b85db52ae378-utilities\") pod \"redhat-operators-knrjh\" (UID: \"a0e6d068-2fcf-46d8-aa9e-b85db52ae378\") " pod="openshift-marketplace/redhat-operators-knrjh" Feb 18 01:52:02 crc kubenswrapper[4847]: I0218 01:52:02.823860 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xvwm\" (UniqueName: 
\"kubernetes.io/projected/a0e6d068-2fcf-46d8-aa9e-b85db52ae378-kube-api-access-2xvwm\") pod \"redhat-operators-knrjh\" (UID: \"a0e6d068-2fcf-46d8-aa9e-b85db52ae378\") " pod="openshift-marketplace/redhat-operators-knrjh" Feb 18 01:52:02 crc kubenswrapper[4847]: I0218 01:52:02.981587 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-knrjh" Feb 18 01:52:03 crc kubenswrapper[4847]: W0218 01:52:03.521986 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0e6d068_2fcf_46d8_aa9e_b85db52ae378.slice/crio-6d2497eb443d5009f0bd3651db0e50490cf0ca56649f4b69e8bc86ad84ffa357 WatchSource:0}: Error finding container 6d2497eb443d5009f0bd3651db0e50490cf0ca56649f4b69e8bc86ad84ffa357: Status 404 returned error can't find the container with id 6d2497eb443d5009f0bd3651db0e50490cf0ca56649f4b69e8bc86ad84ffa357 Feb 18 01:52:03 crc kubenswrapper[4847]: I0218 01:52:03.532458 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-knrjh"] Feb 18 01:52:03 crc kubenswrapper[4847]: I0218 01:52:03.568754 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knrjh" event={"ID":"a0e6d068-2fcf-46d8-aa9e-b85db52ae378","Type":"ContainerStarted","Data":"6d2497eb443d5009f0bd3651db0e50490cf0ca56649f4b69e8bc86ad84ffa357"} Feb 18 01:52:04 crc kubenswrapper[4847]: I0218 01:52:04.583956 4847 generic.go:334] "Generic (PLEG): container finished" podID="a0e6d068-2fcf-46d8-aa9e-b85db52ae378" containerID="aa6daed0c81a2556006db52294115017076f91c3f58dfaff4a54043a720b3878" exitCode=0 Feb 18 01:52:04 crc kubenswrapper[4847]: I0218 01:52:04.584006 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knrjh" 
event={"ID":"a0e6d068-2fcf-46d8-aa9e-b85db52ae378","Type":"ContainerDied","Data":"aa6daed0c81a2556006db52294115017076f91c3f58dfaff4a54043a720b3878"} Feb 18 01:52:06 crc kubenswrapper[4847]: I0218 01:52:06.623483 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knrjh" event={"ID":"a0e6d068-2fcf-46d8-aa9e-b85db52ae378","Type":"ContainerStarted","Data":"919ebf4e04a6bc20c01dde62762bed6640da1262d6f8982896c5c5650519703c"} Feb 18 01:52:07 crc kubenswrapper[4847]: I0218 01:52:07.020518 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nslhb" Feb 18 01:52:07 crc kubenswrapper[4847]: I0218 01:52:07.024446 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nslhb" Feb 18 01:52:07 crc kubenswrapper[4847]: I0218 01:52:07.744193 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nslhb" Feb 18 01:52:08 crc kubenswrapper[4847]: I0218 01:52:08.738613 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nslhb" Feb 18 01:52:09 crc kubenswrapper[4847]: I0218 01:52:09.245179 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nslhb"] Feb 18 01:52:09 crc kubenswrapper[4847]: I0218 01:52:09.405004 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:52:09 crc kubenswrapper[4847]: E0218 01:52:09.405530 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:52:09 crc kubenswrapper[4847]: I0218 01:52:09.667794 4847 generic.go:334] "Generic (PLEG): container finished" podID="a0e6d068-2fcf-46d8-aa9e-b85db52ae378" containerID="919ebf4e04a6bc20c01dde62762bed6640da1262d6f8982896c5c5650519703c" exitCode=0 Feb 18 01:52:09 crc kubenswrapper[4847]: I0218 01:52:09.667883 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knrjh" event={"ID":"a0e6d068-2fcf-46d8-aa9e-b85db52ae378","Type":"ContainerDied","Data":"919ebf4e04a6bc20c01dde62762bed6640da1262d6f8982896c5c5650519703c"} Feb 18 01:52:10 crc kubenswrapper[4847]: E0218 01:52:10.410180 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:52:10 crc kubenswrapper[4847]: I0218 01:52:10.680339 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nslhb" podUID="edb18300-ba56-4f28-8b59-4ab2908f17ac" containerName="registry-server" containerID="cri-o://c51bac9419128faa90afceb1ab37d0204cc8b2473a14ec401e27fcce85bea273" gracePeriod=2 Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.396105 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nslhb" Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.532862 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edb18300-ba56-4f28-8b59-4ab2908f17ac-utilities\") pod \"edb18300-ba56-4f28-8b59-4ab2908f17ac\" (UID: \"edb18300-ba56-4f28-8b59-4ab2908f17ac\") " Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.533014 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5tzd\" (UniqueName: \"kubernetes.io/projected/edb18300-ba56-4f28-8b59-4ab2908f17ac-kube-api-access-n5tzd\") pod \"edb18300-ba56-4f28-8b59-4ab2908f17ac\" (UID: \"edb18300-ba56-4f28-8b59-4ab2908f17ac\") " Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.533050 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edb18300-ba56-4f28-8b59-4ab2908f17ac-catalog-content\") pod \"edb18300-ba56-4f28-8b59-4ab2908f17ac\" (UID: \"edb18300-ba56-4f28-8b59-4ab2908f17ac\") " Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.533774 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edb18300-ba56-4f28-8b59-4ab2908f17ac-utilities" (OuterVolumeSpecName: "utilities") pod "edb18300-ba56-4f28-8b59-4ab2908f17ac" (UID: "edb18300-ba56-4f28-8b59-4ab2908f17ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.539003 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edb18300-ba56-4f28-8b59-4ab2908f17ac-kube-api-access-n5tzd" (OuterVolumeSpecName: "kube-api-access-n5tzd") pod "edb18300-ba56-4f28-8b59-4ab2908f17ac" (UID: "edb18300-ba56-4f28-8b59-4ab2908f17ac"). InnerVolumeSpecName "kube-api-access-n5tzd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.563673 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edb18300-ba56-4f28-8b59-4ab2908f17ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "edb18300-ba56-4f28-8b59-4ab2908f17ac" (UID: "edb18300-ba56-4f28-8b59-4ab2908f17ac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.635096 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5tzd\" (UniqueName: \"kubernetes.io/projected/edb18300-ba56-4f28-8b59-4ab2908f17ac-kube-api-access-n5tzd\") on node \"crc\" DevicePath \"\"" Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.635135 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edb18300-ba56-4f28-8b59-4ab2908f17ac-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.635148 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edb18300-ba56-4f28-8b59-4ab2908f17ac-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.694666 4847 generic.go:334] "Generic (PLEG): container finished" podID="edb18300-ba56-4f28-8b59-4ab2908f17ac" containerID="c51bac9419128faa90afceb1ab37d0204cc8b2473a14ec401e27fcce85bea273" exitCode=0 Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.694731 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nslhb" event={"ID":"edb18300-ba56-4f28-8b59-4ab2908f17ac","Type":"ContainerDied","Data":"c51bac9419128faa90afceb1ab37d0204cc8b2473a14ec401e27fcce85bea273"} Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.694800 4847 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nslhb" Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.695058 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nslhb" event={"ID":"edb18300-ba56-4f28-8b59-4ab2908f17ac","Type":"ContainerDied","Data":"565783fb3c92a0881383d1f04fb7a662a6d43ff81421b5c4616cb471471b5eec"} Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.695091 4847 scope.go:117] "RemoveContainer" containerID="c51bac9419128faa90afceb1ab37d0204cc8b2473a14ec401e27fcce85bea273" Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.698097 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knrjh" event={"ID":"a0e6d068-2fcf-46d8-aa9e-b85db52ae378","Type":"ContainerStarted","Data":"8d84d326d7ca466eb46ad811b0cb68d303bc54a5149f500c375bec69b5cdcf00"} Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.735259 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-knrjh" podStartSLOduration=3.917794294 podStartE2EDuration="9.735235089s" podCreationTimestamp="2026-02-18 01:52:02 +0000 UTC" firstStartedPulling="2026-02-18 01:52:04.587446294 +0000 UTC m=+5197.964797276" lastFinishedPulling="2026-02-18 01:52:10.404887099 +0000 UTC m=+5203.782238071" observedRunningTime="2026-02-18 01:52:11.714295923 +0000 UTC m=+5205.091646875" watchObservedRunningTime="2026-02-18 01:52:11.735235089 +0000 UTC m=+5205.112586051" Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.737905 4847 scope.go:117] "RemoveContainer" containerID="3ec7876f18f59c941ca4f95cbfdaf5187af5d45e09d77b8ff57930e578fb26f3" Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.755007 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nslhb"] Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.768149 4847 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-nslhb"] Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.778369 4847 scope.go:117] "RemoveContainer" containerID="279772705a53a00ec71387276b2b81be364189c711faecada72e5fbd9074392c" Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.811525 4847 scope.go:117] "RemoveContainer" containerID="c51bac9419128faa90afceb1ab37d0204cc8b2473a14ec401e27fcce85bea273" Feb 18 01:52:11 crc kubenswrapper[4847]: E0218 01:52:11.811925 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c51bac9419128faa90afceb1ab37d0204cc8b2473a14ec401e27fcce85bea273\": container with ID starting with c51bac9419128faa90afceb1ab37d0204cc8b2473a14ec401e27fcce85bea273 not found: ID does not exist" containerID="c51bac9419128faa90afceb1ab37d0204cc8b2473a14ec401e27fcce85bea273" Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.811973 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c51bac9419128faa90afceb1ab37d0204cc8b2473a14ec401e27fcce85bea273"} err="failed to get container status \"c51bac9419128faa90afceb1ab37d0204cc8b2473a14ec401e27fcce85bea273\": rpc error: code = NotFound desc = could not find container \"c51bac9419128faa90afceb1ab37d0204cc8b2473a14ec401e27fcce85bea273\": container with ID starting with c51bac9419128faa90afceb1ab37d0204cc8b2473a14ec401e27fcce85bea273 not found: ID does not exist" Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.812008 4847 scope.go:117] "RemoveContainer" containerID="3ec7876f18f59c941ca4f95cbfdaf5187af5d45e09d77b8ff57930e578fb26f3" Feb 18 01:52:11 crc kubenswrapper[4847]: E0218 01:52:11.812253 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ec7876f18f59c941ca4f95cbfdaf5187af5d45e09d77b8ff57930e578fb26f3\": container with ID starting with 
3ec7876f18f59c941ca4f95cbfdaf5187af5d45e09d77b8ff57930e578fb26f3 not found: ID does not exist" containerID="3ec7876f18f59c941ca4f95cbfdaf5187af5d45e09d77b8ff57930e578fb26f3" Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.812285 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ec7876f18f59c941ca4f95cbfdaf5187af5d45e09d77b8ff57930e578fb26f3"} err="failed to get container status \"3ec7876f18f59c941ca4f95cbfdaf5187af5d45e09d77b8ff57930e578fb26f3\": rpc error: code = NotFound desc = could not find container \"3ec7876f18f59c941ca4f95cbfdaf5187af5d45e09d77b8ff57930e578fb26f3\": container with ID starting with 3ec7876f18f59c941ca4f95cbfdaf5187af5d45e09d77b8ff57930e578fb26f3 not found: ID does not exist" Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.812311 4847 scope.go:117] "RemoveContainer" containerID="279772705a53a00ec71387276b2b81be364189c711faecada72e5fbd9074392c" Feb 18 01:52:11 crc kubenswrapper[4847]: E0218 01:52:11.812650 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"279772705a53a00ec71387276b2b81be364189c711faecada72e5fbd9074392c\": container with ID starting with 279772705a53a00ec71387276b2b81be364189c711faecada72e5fbd9074392c not found: ID does not exist" containerID="279772705a53a00ec71387276b2b81be364189c711faecada72e5fbd9074392c" Feb 18 01:52:11 crc kubenswrapper[4847]: I0218 01:52:11.812697 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"279772705a53a00ec71387276b2b81be364189c711faecada72e5fbd9074392c"} err="failed to get container status \"279772705a53a00ec71387276b2b81be364189c711faecada72e5fbd9074392c\": rpc error: code = NotFound desc = could not find container \"279772705a53a00ec71387276b2b81be364189c711faecada72e5fbd9074392c\": container with ID starting with 279772705a53a00ec71387276b2b81be364189c711faecada72e5fbd9074392c not found: ID does not 
exist" Feb 18 01:52:12 crc kubenswrapper[4847]: E0218 01:52:12.405915 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:52:12 crc kubenswrapper[4847]: I0218 01:52:12.982066 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-knrjh" Feb 18 01:52:12 crc kubenswrapper[4847]: I0218 01:52:12.982499 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-knrjh" Feb 18 01:52:13 crc kubenswrapper[4847]: I0218 01:52:13.429133 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edb18300-ba56-4f28-8b59-4ab2908f17ac" path="/var/lib/kubelet/pods/edb18300-ba56-4f28-8b59-4ab2908f17ac/volumes" Feb 18 01:52:14 crc kubenswrapper[4847]: I0218 01:52:14.057499 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-knrjh" podUID="a0e6d068-2fcf-46d8-aa9e-b85db52ae378" containerName="registry-server" probeResult="failure" output=< Feb 18 01:52:14 crc kubenswrapper[4847]: timeout: failed to connect service ":50051" within 1s Feb 18 01:52:14 crc kubenswrapper[4847]: > Feb 18 01:52:22 crc kubenswrapper[4847]: I0218 01:52:22.405632 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:52:22 crc kubenswrapper[4847]: E0218 01:52:22.406400 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:52:23 crc kubenswrapper[4847]: I0218 01:52:23.031905 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-knrjh" Feb 18 01:52:23 crc kubenswrapper[4847]: I0218 01:52:23.095147 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-knrjh" Feb 18 01:52:23 crc kubenswrapper[4847]: I0218 01:52:23.281691 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-knrjh"] Feb 18 01:52:23 crc kubenswrapper[4847]: E0218 01:52:23.408651 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:52:24 crc kubenswrapper[4847]: I0218 01:52:24.861041 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-knrjh" podUID="a0e6d068-2fcf-46d8-aa9e-b85db52ae378" containerName="registry-server" containerID="cri-o://8d84d326d7ca466eb46ad811b0cb68d303bc54a5149f500c375bec69b5cdcf00" gracePeriod=2 Feb 18 01:52:25 crc kubenswrapper[4847]: I0218 01:52:25.399745 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-knrjh" Feb 18 01:52:25 crc kubenswrapper[4847]: I0218 01:52:25.444796 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0e6d068-2fcf-46d8-aa9e-b85db52ae378-catalog-content\") pod \"a0e6d068-2fcf-46d8-aa9e-b85db52ae378\" (UID: \"a0e6d068-2fcf-46d8-aa9e-b85db52ae378\") " Feb 18 01:52:25 crc kubenswrapper[4847]: I0218 01:52:25.445139 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xvwm\" (UniqueName: \"kubernetes.io/projected/a0e6d068-2fcf-46d8-aa9e-b85db52ae378-kube-api-access-2xvwm\") pod \"a0e6d068-2fcf-46d8-aa9e-b85db52ae378\" (UID: \"a0e6d068-2fcf-46d8-aa9e-b85db52ae378\") " Feb 18 01:52:25 crc kubenswrapper[4847]: I0218 01:52:25.445207 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e6d068-2fcf-46d8-aa9e-b85db52ae378-utilities\") pod \"a0e6d068-2fcf-46d8-aa9e-b85db52ae378\" (UID: \"a0e6d068-2fcf-46d8-aa9e-b85db52ae378\") " Feb 18 01:52:25 crc kubenswrapper[4847]: I0218 01:52:25.446544 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0e6d068-2fcf-46d8-aa9e-b85db52ae378-utilities" (OuterVolumeSpecName: "utilities") pod "a0e6d068-2fcf-46d8-aa9e-b85db52ae378" (UID: "a0e6d068-2fcf-46d8-aa9e-b85db52ae378"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:52:25 crc kubenswrapper[4847]: I0218 01:52:25.458880 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e6d068-2fcf-46d8-aa9e-b85db52ae378-kube-api-access-2xvwm" (OuterVolumeSpecName: "kube-api-access-2xvwm") pod "a0e6d068-2fcf-46d8-aa9e-b85db52ae378" (UID: "a0e6d068-2fcf-46d8-aa9e-b85db52ae378"). InnerVolumeSpecName "kube-api-access-2xvwm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:52:25 crc kubenswrapper[4847]: I0218 01:52:25.548577 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xvwm\" (UniqueName: \"kubernetes.io/projected/a0e6d068-2fcf-46d8-aa9e-b85db52ae378-kube-api-access-2xvwm\") on node \"crc\" DevicePath \"\"" Feb 18 01:52:25 crc kubenswrapper[4847]: I0218 01:52:25.548631 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e6d068-2fcf-46d8-aa9e-b85db52ae378-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:52:25 crc kubenswrapper[4847]: I0218 01:52:25.595769 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0e6d068-2fcf-46d8-aa9e-b85db52ae378-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0e6d068-2fcf-46d8-aa9e-b85db52ae378" (UID: "a0e6d068-2fcf-46d8-aa9e-b85db52ae378"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:52:25 crc kubenswrapper[4847]: I0218 01:52:25.650778 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0e6d068-2fcf-46d8-aa9e-b85db52ae378-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:52:25 crc kubenswrapper[4847]: I0218 01:52:25.879245 4847 generic.go:334] "Generic (PLEG): container finished" podID="a0e6d068-2fcf-46d8-aa9e-b85db52ae378" containerID="8d84d326d7ca466eb46ad811b0cb68d303bc54a5149f500c375bec69b5cdcf00" exitCode=0 Feb 18 01:52:25 crc kubenswrapper[4847]: I0218 01:52:25.879309 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-knrjh" event={"ID":"a0e6d068-2fcf-46d8-aa9e-b85db52ae378","Type":"ContainerDied","Data":"8d84d326d7ca466eb46ad811b0cb68d303bc54a5149f500c375bec69b5cdcf00"} Feb 18 01:52:25 crc kubenswrapper[4847]: I0218 01:52:25.879353 4847 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-knrjh" event={"ID":"a0e6d068-2fcf-46d8-aa9e-b85db52ae378","Type":"ContainerDied","Data":"6d2497eb443d5009f0bd3651db0e50490cf0ca56649f4b69e8bc86ad84ffa357"} Feb 18 01:52:25 crc kubenswrapper[4847]: I0218 01:52:25.879383 4847 scope.go:117] "RemoveContainer" containerID="8d84d326d7ca466eb46ad811b0cb68d303bc54a5149f500c375bec69b5cdcf00" Feb 18 01:52:25 crc kubenswrapper[4847]: I0218 01:52:25.879715 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-knrjh" Feb 18 01:52:25 crc kubenswrapper[4847]: I0218 01:52:25.922046 4847 scope.go:117] "RemoveContainer" containerID="919ebf4e04a6bc20c01dde62762bed6640da1262d6f8982896c5c5650519703c" Feb 18 01:52:25 crc kubenswrapper[4847]: I0218 01:52:25.934330 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-knrjh"] Feb 18 01:52:25 crc kubenswrapper[4847]: I0218 01:52:25.948948 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-knrjh"] Feb 18 01:52:25 crc kubenswrapper[4847]: I0218 01:52:25.959256 4847 scope.go:117] "RemoveContainer" containerID="aa6daed0c81a2556006db52294115017076f91c3f58dfaff4a54043a720b3878" Feb 18 01:52:26 crc kubenswrapper[4847]: I0218 01:52:26.008570 4847 scope.go:117] "RemoveContainer" containerID="8d84d326d7ca466eb46ad811b0cb68d303bc54a5149f500c375bec69b5cdcf00" Feb 18 01:52:26 crc kubenswrapper[4847]: E0218 01:52:26.009040 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d84d326d7ca466eb46ad811b0cb68d303bc54a5149f500c375bec69b5cdcf00\": container with ID starting with 8d84d326d7ca466eb46ad811b0cb68d303bc54a5149f500c375bec69b5cdcf00 not found: ID does not exist" containerID="8d84d326d7ca466eb46ad811b0cb68d303bc54a5149f500c375bec69b5cdcf00" Feb 18 01:52:26 crc kubenswrapper[4847]: I0218 01:52:26.009067 4847 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d84d326d7ca466eb46ad811b0cb68d303bc54a5149f500c375bec69b5cdcf00"} err="failed to get container status \"8d84d326d7ca466eb46ad811b0cb68d303bc54a5149f500c375bec69b5cdcf00\": rpc error: code = NotFound desc = could not find container \"8d84d326d7ca466eb46ad811b0cb68d303bc54a5149f500c375bec69b5cdcf00\": container with ID starting with 8d84d326d7ca466eb46ad811b0cb68d303bc54a5149f500c375bec69b5cdcf00 not found: ID does not exist" Feb 18 01:52:26 crc kubenswrapper[4847]: I0218 01:52:26.009087 4847 scope.go:117] "RemoveContainer" containerID="919ebf4e04a6bc20c01dde62762bed6640da1262d6f8982896c5c5650519703c" Feb 18 01:52:26 crc kubenswrapper[4847]: E0218 01:52:26.009756 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"919ebf4e04a6bc20c01dde62762bed6640da1262d6f8982896c5c5650519703c\": container with ID starting with 919ebf4e04a6bc20c01dde62762bed6640da1262d6f8982896c5c5650519703c not found: ID does not exist" containerID="919ebf4e04a6bc20c01dde62762bed6640da1262d6f8982896c5c5650519703c" Feb 18 01:52:26 crc kubenswrapper[4847]: I0218 01:52:26.009808 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"919ebf4e04a6bc20c01dde62762bed6640da1262d6f8982896c5c5650519703c"} err="failed to get container status \"919ebf4e04a6bc20c01dde62762bed6640da1262d6f8982896c5c5650519703c\": rpc error: code = NotFound desc = could not find container \"919ebf4e04a6bc20c01dde62762bed6640da1262d6f8982896c5c5650519703c\": container with ID starting with 919ebf4e04a6bc20c01dde62762bed6640da1262d6f8982896c5c5650519703c not found: ID does not exist" Feb 18 01:52:26 crc kubenswrapper[4847]: I0218 01:52:26.009841 4847 scope.go:117] "RemoveContainer" containerID="aa6daed0c81a2556006db52294115017076f91c3f58dfaff4a54043a720b3878" Feb 18 01:52:26 crc kubenswrapper[4847]: E0218 
01:52:26.010315 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa6daed0c81a2556006db52294115017076f91c3f58dfaff4a54043a720b3878\": container with ID starting with aa6daed0c81a2556006db52294115017076f91c3f58dfaff4a54043a720b3878 not found: ID does not exist" containerID="aa6daed0c81a2556006db52294115017076f91c3f58dfaff4a54043a720b3878" Feb 18 01:52:26 crc kubenswrapper[4847]: I0218 01:52:26.010412 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa6daed0c81a2556006db52294115017076f91c3f58dfaff4a54043a720b3878"} err="failed to get container status \"aa6daed0c81a2556006db52294115017076f91c3f58dfaff4a54043a720b3878\": rpc error: code = NotFound desc = could not find container \"aa6daed0c81a2556006db52294115017076f91c3f58dfaff4a54043a720b3878\": container with ID starting with aa6daed0c81a2556006db52294115017076f91c3f58dfaff4a54043a720b3878 not found: ID does not exist" Feb 18 01:52:26 crc kubenswrapper[4847]: E0218 01:52:26.406876 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:52:27 crc kubenswrapper[4847]: I0218 01:52:27.428693 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0e6d068-2fcf-46d8-aa9e-b85db52ae378" path="/var/lib/kubelet/pods/a0e6d068-2fcf-46d8-aa9e-b85db52ae378/volumes" Feb 18 01:52:34 crc kubenswrapper[4847]: E0218 01:52:34.407941 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:52:37 crc kubenswrapper[4847]: I0218 01:52:37.419412 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:52:37 crc kubenswrapper[4847]: E0218 01:52:37.422905 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:52:40 crc kubenswrapper[4847]: E0218 01:52:40.408927 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:52:44 crc kubenswrapper[4847]: I0218 01:52:44.117701 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-69r5f/must-gather-gssps"] Feb 18 01:52:44 crc kubenswrapper[4847]: E0218 01:52:44.118957 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e6d068-2fcf-46d8-aa9e-b85db52ae378" containerName="extract-content" Feb 18 01:52:44 crc kubenswrapper[4847]: I0218 01:52:44.118972 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e6d068-2fcf-46d8-aa9e-b85db52ae378" containerName="extract-content" Feb 18 01:52:44 crc kubenswrapper[4847]: E0218 01:52:44.118983 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e6d068-2fcf-46d8-aa9e-b85db52ae378" containerName="extract-utilities" Feb 18 01:52:44 crc kubenswrapper[4847]: I0218 01:52:44.118990 4847 
state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e6d068-2fcf-46d8-aa9e-b85db52ae378" containerName="extract-utilities" Feb 18 01:52:44 crc kubenswrapper[4847]: E0218 01:52:44.119004 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edb18300-ba56-4f28-8b59-4ab2908f17ac" containerName="extract-utilities" Feb 18 01:52:44 crc kubenswrapper[4847]: I0218 01:52:44.119009 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="edb18300-ba56-4f28-8b59-4ab2908f17ac" containerName="extract-utilities" Feb 18 01:52:44 crc kubenswrapper[4847]: E0218 01:52:44.119031 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edb18300-ba56-4f28-8b59-4ab2908f17ac" containerName="registry-server" Feb 18 01:52:44 crc kubenswrapper[4847]: I0218 01:52:44.119036 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="edb18300-ba56-4f28-8b59-4ab2908f17ac" containerName="registry-server" Feb 18 01:52:44 crc kubenswrapper[4847]: E0218 01:52:44.119049 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e6d068-2fcf-46d8-aa9e-b85db52ae378" containerName="registry-server" Feb 18 01:52:44 crc kubenswrapper[4847]: I0218 01:52:44.119054 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e6d068-2fcf-46d8-aa9e-b85db52ae378" containerName="registry-server" Feb 18 01:52:44 crc kubenswrapper[4847]: E0218 01:52:44.119080 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edb18300-ba56-4f28-8b59-4ab2908f17ac" containerName="extract-content" Feb 18 01:52:44 crc kubenswrapper[4847]: I0218 01:52:44.119086 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="edb18300-ba56-4f28-8b59-4ab2908f17ac" containerName="extract-content" Feb 18 01:52:44 crc kubenswrapper[4847]: I0218 01:52:44.119273 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="edb18300-ba56-4f28-8b59-4ab2908f17ac" containerName="registry-server" Feb 18 01:52:44 crc kubenswrapper[4847]: I0218 01:52:44.119291 4847 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a0e6d068-2fcf-46d8-aa9e-b85db52ae378" containerName="registry-server" Feb 18 01:52:44 crc kubenswrapper[4847]: I0218 01:52:44.121164 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-69r5f/must-gather-gssps" Feb 18 01:52:44 crc kubenswrapper[4847]: I0218 01:52:44.123905 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-69r5f"/"kube-root-ca.crt" Feb 18 01:52:44 crc kubenswrapper[4847]: I0218 01:52:44.124050 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-69r5f"/"openshift-service-ca.crt" Feb 18 01:52:44 crc kubenswrapper[4847]: I0218 01:52:44.144730 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-69r5f/must-gather-gssps"] Feb 18 01:52:44 crc kubenswrapper[4847]: I0218 01:52:44.230637 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/98e7900f-9560-4111-a5fd-40d31cab3a0b-must-gather-output\") pod \"must-gather-gssps\" (UID: \"98e7900f-9560-4111-a5fd-40d31cab3a0b\") " pod="openshift-must-gather-69r5f/must-gather-gssps" Feb 18 01:52:44 crc kubenswrapper[4847]: I0218 01:52:44.230915 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2jhx\" (UniqueName: \"kubernetes.io/projected/98e7900f-9560-4111-a5fd-40d31cab3a0b-kube-api-access-d2jhx\") pod \"must-gather-gssps\" (UID: \"98e7900f-9560-4111-a5fd-40d31cab3a0b\") " pod="openshift-must-gather-69r5f/must-gather-gssps" Feb 18 01:52:44 crc kubenswrapper[4847]: I0218 01:52:44.332943 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/98e7900f-9560-4111-a5fd-40d31cab3a0b-must-gather-output\") pod \"must-gather-gssps\" (UID: 
\"98e7900f-9560-4111-a5fd-40d31cab3a0b\") " pod="openshift-must-gather-69r5f/must-gather-gssps" Feb 18 01:52:44 crc kubenswrapper[4847]: I0218 01:52:44.333063 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2jhx\" (UniqueName: \"kubernetes.io/projected/98e7900f-9560-4111-a5fd-40d31cab3a0b-kube-api-access-d2jhx\") pod \"must-gather-gssps\" (UID: \"98e7900f-9560-4111-a5fd-40d31cab3a0b\") " pod="openshift-must-gather-69r5f/must-gather-gssps" Feb 18 01:52:44 crc kubenswrapper[4847]: I0218 01:52:44.333785 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/98e7900f-9560-4111-a5fd-40d31cab3a0b-must-gather-output\") pod \"must-gather-gssps\" (UID: \"98e7900f-9560-4111-a5fd-40d31cab3a0b\") " pod="openshift-must-gather-69r5f/must-gather-gssps" Feb 18 01:52:44 crc kubenswrapper[4847]: I0218 01:52:44.352626 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2jhx\" (UniqueName: \"kubernetes.io/projected/98e7900f-9560-4111-a5fd-40d31cab3a0b-kube-api-access-d2jhx\") pod \"must-gather-gssps\" (UID: \"98e7900f-9560-4111-a5fd-40d31cab3a0b\") " pod="openshift-must-gather-69r5f/must-gather-gssps" Feb 18 01:52:44 crc kubenswrapper[4847]: I0218 01:52:44.439121 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-69r5f/must-gather-gssps" Feb 18 01:52:44 crc kubenswrapper[4847]: I0218 01:52:44.918563 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-69r5f/must-gather-gssps"] Feb 18 01:52:45 crc kubenswrapper[4847]: I0218 01:52:45.148477 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-69r5f/must-gather-gssps" event={"ID":"98e7900f-9560-4111-a5fd-40d31cab3a0b","Type":"ContainerStarted","Data":"0234abedaabdfa28cb8dba2170287b63d47905fe0fcd061e131a87151ae5b03a"} Feb 18 01:52:49 crc kubenswrapper[4847]: E0218 01:52:49.406948 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:52:50 crc kubenswrapper[4847]: I0218 01:52:50.404969 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:52:50 crc kubenswrapper[4847]: E0218 01:52:50.405559 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:52:51 crc kubenswrapper[4847]: E0218 01:52:51.767827 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:52:53 crc kubenswrapper[4847]: I0218 01:52:53.826522 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-69r5f/must-gather-gssps" event={"ID":"98e7900f-9560-4111-a5fd-40d31cab3a0b","Type":"ContainerStarted","Data":"f630250db5843394982839c70f9ebc3683447f40e6213b4d4cbe5251185f3989"} Feb 18 01:52:54 crc kubenswrapper[4847]: I0218 01:52:54.843984 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-69r5f/must-gather-gssps" event={"ID":"98e7900f-9560-4111-a5fd-40d31cab3a0b","Type":"ContainerStarted","Data":"c6019846a0e31c8d4fc024c601869173e8c18cde78964ec9db834278aa07fd07"} Feb 18 01:52:54 crc kubenswrapper[4847]: I0218 01:52:54.878944 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-69r5f/must-gather-gssps" podStartSLOduration=2.298860077 podStartE2EDuration="10.878918194s" podCreationTimestamp="2026-02-18 01:52:44 +0000 UTC" firstStartedPulling="2026-02-18 01:52:44.925296422 +0000 UTC m=+5238.302647364" lastFinishedPulling="2026-02-18 01:52:53.505354539 +0000 UTC m=+5246.882705481" observedRunningTime="2026-02-18 01:52:54.864946439 +0000 UTC m=+5248.242297441" watchObservedRunningTime="2026-02-18 01:52:54.878918194 +0000 UTC m=+5248.256269166" Feb 18 01:52:57 crc kubenswrapper[4847]: E0218 01:52:57.876415 4847 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.80:38364->38.102.83.80:41687: read tcp 38.102.83.80:38364->38.102.83.80:41687: read: connection reset by peer Feb 18 01:52:58 crc kubenswrapper[4847]: I0218 01:52:58.847351 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-69r5f/crc-debug-7zv6t"] Feb 18 01:52:58 crc kubenswrapper[4847]: I0218 01:52:58.848940 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-69r5f/crc-debug-7zv6t" Feb 18 01:52:58 crc kubenswrapper[4847]: I0218 01:52:58.851661 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-69r5f"/"default-dockercfg-vfv78" Feb 18 01:52:58 crc kubenswrapper[4847]: I0218 01:52:58.998360 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b2c244ed-05a5-4951-8304-fb45d3a8a55c-host\") pod \"crc-debug-7zv6t\" (UID: \"b2c244ed-05a5-4951-8304-fb45d3a8a55c\") " pod="openshift-must-gather-69r5f/crc-debug-7zv6t" Feb 18 01:52:58 crc kubenswrapper[4847]: I0218 01:52:58.998520 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2hml\" (UniqueName: \"kubernetes.io/projected/b2c244ed-05a5-4951-8304-fb45d3a8a55c-kube-api-access-m2hml\") pod \"crc-debug-7zv6t\" (UID: \"b2c244ed-05a5-4951-8304-fb45d3a8a55c\") " pod="openshift-must-gather-69r5f/crc-debug-7zv6t" Feb 18 01:52:59 crc kubenswrapper[4847]: I0218 01:52:59.100365 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b2c244ed-05a5-4951-8304-fb45d3a8a55c-host\") pod \"crc-debug-7zv6t\" (UID: \"b2c244ed-05a5-4951-8304-fb45d3a8a55c\") " pod="openshift-must-gather-69r5f/crc-debug-7zv6t" Feb 18 01:52:59 crc kubenswrapper[4847]: I0218 01:52:59.100463 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2hml\" (UniqueName: \"kubernetes.io/projected/b2c244ed-05a5-4951-8304-fb45d3a8a55c-kube-api-access-m2hml\") pod \"crc-debug-7zv6t\" (UID: \"b2c244ed-05a5-4951-8304-fb45d3a8a55c\") " pod="openshift-must-gather-69r5f/crc-debug-7zv6t" Feb 18 01:52:59 crc kubenswrapper[4847]: I0218 01:52:59.100838 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/b2c244ed-05a5-4951-8304-fb45d3a8a55c-host\") pod \"crc-debug-7zv6t\" (UID: \"b2c244ed-05a5-4951-8304-fb45d3a8a55c\") " pod="openshift-must-gather-69r5f/crc-debug-7zv6t" Feb 18 01:52:59 crc kubenswrapper[4847]: I0218 01:52:59.123006 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2hml\" (UniqueName: \"kubernetes.io/projected/b2c244ed-05a5-4951-8304-fb45d3a8a55c-kube-api-access-m2hml\") pod \"crc-debug-7zv6t\" (UID: \"b2c244ed-05a5-4951-8304-fb45d3a8a55c\") " pod="openshift-must-gather-69r5f/crc-debug-7zv6t" Feb 18 01:52:59 crc kubenswrapper[4847]: I0218 01:52:59.167291 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-69r5f/crc-debug-7zv6t" Feb 18 01:52:59 crc kubenswrapper[4847]: W0218 01:52:59.203148 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2c244ed_05a5_4951_8304_fb45d3a8a55c.slice/crio-d2233f80d9b1cf42f86b7a57c76a0d5476c75256e32b9aca8bbb28e4757592ba WatchSource:0}: Error finding container d2233f80d9b1cf42f86b7a57c76a0d5476c75256e32b9aca8bbb28e4757592ba: Status 404 returned error can't find the container with id d2233f80d9b1cf42f86b7a57c76a0d5476c75256e32b9aca8bbb28e4757592ba Feb 18 01:52:59 crc kubenswrapper[4847]: I0218 01:52:59.901755 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-69r5f/crc-debug-7zv6t" event={"ID":"b2c244ed-05a5-4951-8304-fb45d3a8a55c","Type":"ContainerStarted","Data":"d2233f80d9b1cf42f86b7a57c76a0d5476c75256e32b9aca8bbb28e4757592ba"} Feb 18 01:53:01 crc kubenswrapper[4847]: E0218 01:53:01.406196 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:53:03 crc kubenswrapper[4847]: I0218 01:53:03.404714 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:53:03 crc kubenswrapper[4847]: E0218 01:53:03.405510 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:53:03 crc kubenswrapper[4847]: E0218 01:53:03.406862 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:53:11 crc kubenswrapper[4847]: I0218 01:53:11.022739 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-69r5f/crc-debug-7zv6t" event={"ID":"b2c244ed-05a5-4951-8304-fb45d3a8a55c","Type":"ContainerStarted","Data":"37b70171e2fc0992e133544869ab526ad90326e2f51615efe80c8b597a40eec7"} Feb 18 01:53:12 crc kubenswrapper[4847]: E0218 01:53:12.409181 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:53:14 crc kubenswrapper[4847]: I0218 01:53:14.405368 4847 scope.go:117] "RemoveContainer" 
containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:53:14 crc kubenswrapper[4847]: E0218 01:53:14.406210 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:53:18 crc kubenswrapper[4847]: E0218 01:53:18.411649 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.390763 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-69r5f/crc-debug-7zv6t" podStartSLOduration=10.712542755 podStartE2EDuration="21.390734519s" podCreationTimestamp="2026-02-18 01:52:58 +0000 UTC" firstStartedPulling="2026-02-18 01:52:59.205582236 +0000 UTC m=+5252.582933178" lastFinishedPulling="2026-02-18 01:53:09.883774 +0000 UTC m=+5263.261124942" observedRunningTime="2026-02-18 01:53:11.060237504 +0000 UTC m=+5264.437588486" watchObservedRunningTime="2026-02-18 01:53:19.390734519 +0000 UTC m=+5272.768085471" Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.394855 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jgtdv"] Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.397723 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jgtdv" Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.440170 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jgtdv"] Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.551615 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41aa5b5e-b48f-4cee-8f37-6f0229e3766a-utilities\") pod \"community-operators-jgtdv\" (UID: \"41aa5b5e-b48f-4cee-8f37-6f0229e3766a\") " pod="openshift-marketplace/community-operators-jgtdv" Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.552043 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfm5d\" (UniqueName: \"kubernetes.io/projected/41aa5b5e-b48f-4cee-8f37-6f0229e3766a-kube-api-access-tfm5d\") pod \"community-operators-jgtdv\" (UID: \"41aa5b5e-b48f-4cee-8f37-6f0229e3766a\") " pod="openshift-marketplace/community-operators-jgtdv" Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.552126 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41aa5b5e-b48f-4cee-8f37-6f0229e3766a-catalog-content\") pod \"community-operators-jgtdv\" (UID: \"41aa5b5e-b48f-4cee-8f37-6f0229e3766a\") " pod="openshift-marketplace/community-operators-jgtdv" Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.581536 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7sccw"] Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.584079 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7sccw" Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.592822 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7sccw"] Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.654116 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfm5d\" (UniqueName: \"kubernetes.io/projected/41aa5b5e-b48f-4cee-8f37-6f0229e3766a-kube-api-access-tfm5d\") pod \"community-operators-jgtdv\" (UID: \"41aa5b5e-b48f-4cee-8f37-6f0229e3766a\") " pod="openshift-marketplace/community-operators-jgtdv" Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.654273 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41aa5b5e-b48f-4cee-8f37-6f0229e3766a-catalog-content\") pod \"community-operators-jgtdv\" (UID: \"41aa5b5e-b48f-4cee-8f37-6f0229e3766a\") " pod="openshift-marketplace/community-operators-jgtdv" Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.654310 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41aa5b5e-b48f-4cee-8f37-6f0229e3766a-utilities\") pod \"community-operators-jgtdv\" (UID: \"41aa5b5e-b48f-4cee-8f37-6f0229e3766a\") " pod="openshift-marketplace/community-operators-jgtdv" Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.654901 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41aa5b5e-b48f-4cee-8f37-6f0229e3766a-utilities\") pod \"community-operators-jgtdv\" (UID: \"41aa5b5e-b48f-4cee-8f37-6f0229e3766a\") " pod="openshift-marketplace/community-operators-jgtdv" Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.655463 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/41aa5b5e-b48f-4cee-8f37-6f0229e3766a-catalog-content\") pod \"community-operators-jgtdv\" (UID: \"41aa5b5e-b48f-4cee-8f37-6f0229e3766a\") " pod="openshift-marketplace/community-operators-jgtdv" Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.691359 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfm5d\" (UniqueName: \"kubernetes.io/projected/41aa5b5e-b48f-4cee-8f37-6f0229e3766a-kube-api-access-tfm5d\") pod \"community-operators-jgtdv\" (UID: \"41aa5b5e-b48f-4cee-8f37-6f0229e3766a\") " pod="openshift-marketplace/community-operators-jgtdv" Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.755778 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9737f87a-c79d-4f6f-9ab3-9b4772129b6e-catalog-content\") pod \"certified-operators-7sccw\" (UID: \"9737f87a-c79d-4f6f-9ab3-9b4772129b6e\") " pod="openshift-marketplace/certified-operators-7sccw" Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.756230 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jgtdv" Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.756366 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9737f87a-c79d-4f6f-9ab3-9b4772129b6e-utilities\") pod \"certified-operators-7sccw\" (UID: \"9737f87a-c79d-4f6f-9ab3-9b4772129b6e\") " pod="openshift-marketplace/certified-operators-7sccw" Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.756772 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdff5\" (UniqueName: \"kubernetes.io/projected/9737f87a-c79d-4f6f-9ab3-9b4772129b6e-kube-api-access-gdff5\") pod \"certified-operators-7sccw\" (UID: \"9737f87a-c79d-4f6f-9ab3-9b4772129b6e\") " pod="openshift-marketplace/certified-operators-7sccw" Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.859467 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9737f87a-c79d-4f6f-9ab3-9b4772129b6e-catalog-content\") pod \"certified-operators-7sccw\" (UID: \"9737f87a-c79d-4f6f-9ab3-9b4772129b6e\") " pod="openshift-marketplace/certified-operators-7sccw" Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.859512 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9737f87a-c79d-4f6f-9ab3-9b4772129b6e-utilities\") pod \"certified-operators-7sccw\" (UID: \"9737f87a-c79d-4f6f-9ab3-9b4772129b6e\") " pod="openshift-marketplace/certified-operators-7sccw" Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.859564 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdff5\" (UniqueName: \"kubernetes.io/projected/9737f87a-c79d-4f6f-9ab3-9b4772129b6e-kube-api-access-gdff5\") pod \"certified-operators-7sccw\" 
(UID: \"9737f87a-c79d-4f6f-9ab3-9b4772129b6e\") " pod="openshift-marketplace/certified-operators-7sccw" Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.859989 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9737f87a-c79d-4f6f-9ab3-9b4772129b6e-catalog-content\") pod \"certified-operators-7sccw\" (UID: \"9737f87a-c79d-4f6f-9ab3-9b4772129b6e\") " pod="openshift-marketplace/certified-operators-7sccw" Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.860051 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9737f87a-c79d-4f6f-9ab3-9b4772129b6e-utilities\") pod \"certified-operators-7sccw\" (UID: \"9737f87a-c79d-4f6f-9ab3-9b4772129b6e\") " pod="openshift-marketplace/certified-operators-7sccw" Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.883770 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdff5\" (UniqueName: \"kubernetes.io/projected/9737f87a-c79d-4f6f-9ab3-9b4772129b6e-kube-api-access-gdff5\") pod \"certified-operators-7sccw\" (UID: \"9737f87a-c79d-4f6f-9ab3-9b4772129b6e\") " pod="openshift-marketplace/certified-operators-7sccw" Feb 18 01:53:19 crc kubenswrapper[4847]: I0218 01:53:19.915708 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7sccw" Feb 18 01:53:20 crc kubenswrapper[4847]: I0218 01:53:20.399724 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jgtdv"] Feb 18 01:53:20 crc kubenswrapper[4847]: I0218 01:53:20.615308 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7sccw"] Feb 18 01:53:21 crc kubenswrapper[4847]: I0218 01:53:21.146527 4847 generic.go:334] "Generic (PLEG): container finished" podID="41aa5b5e-b48f-4cee-8f37-6f0229e3766a" containerID="d0830d3c1df0cad23367b954d1ca6880c302144452c1063e2381a74d3f726a1f" exitCode=0 Feb 18 01:53:21 crc kubenswrapper[4847]: I0218 01:53:21.146577 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgtdv" event={"ID":"41aa5b5e-b48f-4cee-8f37-6f0229e3766a","Type":"ContainerDied","Data":"d0830d3c1df0cad23367b954d1ca6880c302144452c1063e2381a74d3f726a1f"} Feb 18 01:53:21 crc kubenswrapper[4847]: I0218 01:53:21.146825 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgtdv" event={"ID":"41aa5b5e-b48f-4cee-8f37-6f0229e3766a","Type":"ContainerStarted","Data":"d988d7f19d82f07ebc0b38cc8525016ff73d1b8ce998558ce92edf490b5aa539"} Feb 18 01:53:21 crc kubenswrapper[4847]: I0218 01:53:21.148940 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sccw" event={"ID":"9737f87a-c79d-4f6f-9ab3-9b4772129b6e","Type":"ContainerStarted","Data":"84022778fc12b04c231df134483a7ff88d80e6e6436248ae4a7b652507908e02"} Feb 18 01:53:22 crc kubenswrapper[4847]: I0218 01:53:22.174006 4847 generic.go:334] "Generic (PLEG): container finished" podID="9737f87a-c79d-4f6f-9ab3-9b4772129b6e" containerID="03feefa14eb1ed184c950687b841b1141fbdc355c498528ef21f02c762bfa5d9" exitCode=0 Feb 18 01:53:22 crc kubenswrapper[4847]: I0218 01:53:22.174548 4847 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sccw" event={"ID":"9737f87a-c79d-4f6f-9ab3-9b4772129b6e","Type":"ContainerDied","Data":"03feefa14eb1ed184c950687b841b1141fbdc355c498528ef21f02c762bfa5d9"} Feb 18 01:53:25 crc kubenswrapper[4847]: I0218 01:53:25.207142 4847 generic.go:334] "Generic (PLEG): container finished" podID="9737f87a-c79d-4f6f-9ab3-9b4772129b6e" containerID="e9b82726174252ef52a4e85852c4dae61bcc2cdbcd62bc4cb91c5323eab0cf4f" exitCode=0 Feb 18 01:53:25 crc kubenswrapper[4847]: I0218 01:53:25.207668 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sccw" event={"ID":"9737f87a-c79d-4f6f-9ab3-9b4772129b6e","Type":"ContainerDied","Data":"e9b82726174252ef52a4e85852c4dae61bcc2cdbcd62bc4cb91c5323eab0cf4f"} Feb 18 01:53:26 crc kubenswrapper[4847]: I0218 01:53:26.412468 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:53:26 crc kubenswrapper[4847]: I0218 01:53:26.915048 4847 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 01:53:27 crc kubenswrapper[4847]: I0218 01:53:27.232012 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgtdv" event={"ID":"41aa5b5e-b48f-4cee-8f37-6f0229e3766a","Type":"ContainerStarted","Data":"85db1735954da54bc26bc555aaf55a62f5209dba0731e9e8d90d50e58b73012d"} Feb 18 01:53:27 crc kubenswrapper[4847]: E0218 01:53:27.417037 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:53:27 crc kubenswrapper[4847]: E0218 01:53:27.704394 4847 cadvisor_stats_provider.go:516] "Partial 
failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41aa5b5e_b48f_4cee_8f37_6f0229e3766a.slice/crio-conmon-85db1735954da54bc26bc555aaf55a62f5209dba0731e9e8d90d50e58b73012d.scope\": RecentStats: unable to find data in memory cache]" Feb 18 01:53:28 crc kubenswrapper[4847]: I0218 01:53:28.256102 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sccw" event={"ID":"9737f87a-c79d-4f6f-9ab3-9b4772129b6e","Type":"ContainerStarted","Data":"a073258b111e53d468d02e65adefbfdb9bf2b7e7d03a857dfc05635f59f27be9"} Feb 18 01:53:28 crc kubenswrapper[4847]: I0218 01:53:28.259746 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerStarted","Data":"440c5b942d84801d3391d67ccb7bd978f4d142c7b0d272a754c51245ebf9c23c"} Feb 18 01:53:28 crc kubenswrapper[4847]: I0218 01:53:28.262660 4847 generic.go:334] "Generic (PLEG): container finished" podID="41aa5b5e-b48f-4cee-8f37-6f0229e3766a" containerID="85db1735954da54bc26bc555aaf55a62f5209dba0731e9e8d90d50e58b73012d" exitCode=0 Feb 18 01:53:28 crc kubenswrapper[4847]: I0218 01:53:28.262706 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgtdv" event={"ID":"41aa5b5e-b48f-4cee-8f37-6f0229e3766a","Type":"ContainerDied","Data":"85db1735954da54bc26bc555aaf55a62f5209dba0731e9e8d90d50e58b73012d"} Feb 18 01:53:28 crc kubenswrapper[4847]: I0218 01:53:28.286332 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7sccw" podStartSLOduration=4.099177839 podStartE2EDuration="9.286311345s" podCreationTimestamp="2026-02-18 01:53:19 +0000 UTC" firstStartedPulling="2026-02-18 01:53:22.177753662 +0000 UTC m=+5275.555104604" lastFinishedPulling="2026-02-18 01:53:27.364887168 +0000 
UTC m=+5280.742238110" observedRunningTime="2026-02-18 01:53:28.279067617 +0000 UTC m=+5281.656418559" watchObservedRunningTime="2026-02-18 01:53:28.286311345 +0000 UTC m=+5281.663662307" Feb 18 01:53:29 crc kubenswrapper[4847]: I0218 01:53:29.275508 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgtdv" event={"ID":"41aa5b5e-b48f-4cee-8f37-6f0229e3766a","Type":"ContainerStarted","Data":"c11f645795a17459f5cde284f014cec5dd1dc04bd1387d2448ec94981ac7427f"} Feb 18 01:53:29 crc kubenswrapper[4847]: I0218 01:53:29.295832 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jgtdv" podStartSLOduration=2.973130247 podStartE2EDuration="10.295811824s" podCreationTimestamp="2026-02-18 01:53:19 +0000 UTC" firstStartedPulling="2026-02-18 01:53:21.388520073 +0000 UTC m=+5274.765871015" lastFinishedPulling="2026-02-18 01:53:28.71120165 +0000 UTC m=+5282.088552592" observedRunningTime="2026-02-18 01:53:29.29118428 +0000 UTC m=+5282.668535222" watchObservedRunningTime="2026-02-18 01:53:29.295811824 +0000 UTC m=+5282.673162776" Feb 18 01:53:29 crc kubenswrapper[4847]: I0218 01:53:29.757473 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jgtdv" Feb 18 01:53:29 crc kubenswrapper[4847]: I0218 01:53:29.757786 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jgtdv" Feb 18 01:53:29 crc kubenswrapper[4847]: I0218 01:53:29.916383 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7sccw" Feb 18 01:53:29 crc kubenswrapper[4847]: I0218 01:53:29.917373 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7sccw" Feb 18 01:53:30 crc kubenswrapper[4847]: I0218 01:53:30.290387 4847 generic.go:334] "Generic 
(PLEG): container finished" podID="b2c244ed-05a5-4951-8304-fb45d3a8a55c" containerID="37b70171e2fc0992e133544869ab526ad90326e2f51615efe80c8b597a40eec7" exitCode=0 Feb 18 01:53:30 crc kubenswrapper[4847]: I0218 01:53:30.290481 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-69r5f/crc-debug-7zv6t" event={"ID":"b2c244ed-05a5-4951-8304-fb45d3a8a55c","Type":"ContainerDied","Data":"37b70171e2fc0992e133544869ab526ad90326e2f51615efe80c8b597a40eec7"} Feb 18 01:53:30 crc kubenswrapper[4847]: E0218 01:53:30.535577 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:53:30 crc kubenswrapper[4847]: E0218 01:53:30.535837 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:53:30 crc kubenswrapper[4847]: E0218 01:53:30.535941 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjwt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-k4t5r_openstack(452f74c1-fa5f-464b-9943-a4a1c2d5c48a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:53:30 crc kubenswrapper[4847]: E0218 01:53:30.537094 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:53:30 crc kubenswrapper[4847]: I0218 01:53:30.818722 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jgtdv" podUID="41aa5b5e-b48f-4cee-8f37-6f0229e3766a" containerName="registry-server" probeResult="failure" output=< Feb 18 01:53:30 crc kubenswrapper[4847]: timeout: failed to connect service ":50051" within 1s Feb 18 01:53:30 crc kubenswrapper[4847]: > Feb 18 01:53:30 crc kubenswrapper[4847]: I0218 01:53:30.961644 4847 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-7sccw" podUID="9737f87a-c79d-4f6f-9ab3-9b4772129b6e" containerName="registry-server" probeResult="failure" output=< Feb 18 01:53:30 crc kubenswrapper[4847]: timeout: failed to connect service ":50051" within 1s Feb 18 01:53:30 crc kubenswrapper[4847]: > Feb 18 01:53:31 crc kubenswrapper[4847]: I0218 01:53:31.444148 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-69r5f/crc-debug-7zv6t" Feb 18 01:53:31 crc kubenswrapper[4847]: I0218 01:53:31.480220 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-69r5f/crc-debug-7zv6t"] Feb 18 01:53:31 crc kubenswrapper[4847]: I0218 01:53:31.493505 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-69r5f/crc-debug-7zv6t"] Feb 18 01:53:31 crc kubenswrapper[4847]: I0218 01:53:31.552186 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2hml\" (UniqueName: \"kubernetes.io/projected/b2c244ed-05a5-4951-8304-fb45d3a8a55c-kube-api-access-m2hml\") pod \"b2c244ed-05a5-4951-8304-fb45d3a8a55c\" (UID: \"b2c244ed-05a5-4951-8304-fb45d3a8a55c\") " Feb 18 01:53:31 crc kubenswrapper[4847]: I0218 01:53:31.552732 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b2c244ed-05a5-4951-8304-fb45d3a8a55c-host\") pod \"b2c244ed-05a5-4951-8304-fb45d3a8a55c\" (UID: \"b2c244ed-05a5-4951-8304-fb45d3a8a55c\") " Feb 18 01:53:31 crc kubenswrapper[4847]: I0218 01:53:31.552807 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2c244ed-05a5-4951-8304-fb45d3a8a55c-host" (OuterVolumeSpecName: "host") pod "b2c244ed-05a5-4951-8304-fb45d3a8a55c" (UID: "b2c244ed-05a5-4951-8304-fb45d3a8a55c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 01:53:31 crc kubenswrapper[4847]: I0218 01:53:31.553400 4847 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b2c244ed-05a5-4951-8304-fb45d3a8a55c-host\") on node \"crc\" DevicePath \"\"" Feb 18 01:53:31 crc kubenswrapper[4847]: I0218 01:53:31.560814 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c244ed-05a5-4951-8304-fb45d3a8a55c-kube-api-access-m2hml" (OuterVolumeSpecName: "kube-api-access-m2hml") pod "b2c244ed-05a5-4951-8304-fb45d3a8a55c" (UID: "b2c244ed-05a5-4951-8304-fb45d3a8a55c"). InnerVolumeSpecName "kube-api-access-m2hml". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:53:31 crc kubenswrapper[4847]: I0218 01:53:31.655599 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2hml\" (UniqueName: \"kubernetes.io/projected/b2c244ed-05a5-4951-8304-fb45d3a8a55c-kube-api-access-m2hml\") on node \"crc\" DevicePath \"\"" Feb 18 01:53:32 crc kubenswrapper[4847]: I0218 01:53:32.312277 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2233f80d9b1cf42f86b7a57c76a0d5476c75256e32b9aca8bbb28e4757592ba" Feb 18 01:53:32 crc kubenswrapper[4847]: I0218 01:53:32.312326 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-69r5f/crc-debug-7zv6t" Feb 18 01:53:32 crc kubenswrapper[4847]: I0218 01:53:32.892324 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-69r5f/crc-debug-m2d7d"] Feb 18 01:53:32 crc kubenswrapper[4847]: E0218 01:53:32.893046 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2c244ed-05a5-4951-8304-fb45d3a8a55c" containerName="container-00" Feb 18 01:53:32 crc kubenswrapper[4847]: I0218 01:53:32.893058 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2c244ed-05a5-4951-8304-fb45d3a8a55c" containerName="container-00" Feb 18 01:53:32 crc kubenswrapper[4847]: I0218 01:53:32.893288 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2c244ed-05a5-4951-8304-fb45d3a8a55c" containerName="container-00" Feb 18 01:53:32 crc kubenswrapper[4847]: I0218 01:53:32.894044 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-69r5f/crc-debug-m2d7d" Feb 18 01:53:32 crc kubenswrapper[4847]: I0218 01:53:32.896404 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-69r5f"/"default-dockercfg-vfv78" Feb 18 01:53:33 crc kubenswrapper[4847]: I0218 01:53:33.083545 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ba90b9d-26a6-4184-b27b-303825add8a9-host\") pod \"crc-debug-m2d7d\" (UID: \"6ba90b9d-26a6-4184-b27b-303825add8a9\") " pod="openshift-must-gather-69r5f/crc-debug-m2d7d" Feb 18 01:53:33 crc kubenswrapper[4847]: I0218 01:53:33.084038 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z6z5\" (UniqueName: \"kubernetes.io/projected/6ba90b9d-26a6-4184-b27b-303825add8a9-kube-api-access-8z6z5\") pod \"crc-debug-m2d7d\" (UID: \"6ba90b9d-26a6-4184-b27b-303825add8a9\") " 
pod="openshift-must-gather-69r5f/crc-debug-m2d7d" Feb 18 01:53:33 crc kubenswrapper[4847]: I0218 01:53:33.186256 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z6z5\" (UniqueName: \"kubernetes.io/projected/6ba90b9d-26a6-4184-b27b-303825add8a9-kube-api-access-8z6z5\") pod \"crc-debug-m2d7d\" (UID: \"6ba90b9d-26a6-4184-b27b-303825add8a9\") " pod="openshift-must-gather-69r5f/crc-debug-m2d7d" Feb 18 01:53:33 crc kubenswrapper[4847]: I0218 01:53:33.186408 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ba90b9d-26a6-4184-b27b-303825add8a9-host\") pod \"crc-debug-m2d7d\" (UID: \"6ba90b9d-26a6-4184-b27b-303825add8a9\") " pod="openshift-must-gather-69r5f/crc-debug-m2d7d" Feb 18 01:53:33 crc kubenswrapper[4847]: I0218 01:53:33.186547 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ba90b9d-26a6-4184-b27b-303825add8a9-host\") pod \"crc-debug-m2d7d\" (UID: \"6ba90b9d-26a6-4184-b27b-303825add8a9\") " pod="openshift-must-gather-69r5f/crc-debug-m2d7d" Feb 18 01:53:33 crc kubenswrapper[4847]: I0218 01:53:33.204068 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z6z5\" (UniqueName: \"kubernetes.io/projected/6ba90b9d-26a6-4184-b27b-303825add8a9-kube-api-access-8z6z5\") pod \"crc-debug-m2d7d\" (UID: \"6ba90b9d-26a6-4184-b27b-303825add8a9\") " pod="openshift-must-gather-69r5f/crc-debug-m2d7d" Feb 18 01:53:33 crc kubenswrapper[4847]: I0218 01:53:33.210199 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-69r5f/crc-debug-m2d7d" Feb 18 01:53:33 crc kubenswrapper[4847]: I0218 01:53:33.322964 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-69r5f/crc-debug-m2d7d" event={"ID":"6ba90b9d-26a6-4184-b27b-303825add8a9","Type":"ContainerStarted","Data":"2db80d70f95d2443c4a709853f2e67e51366ee924c11a4ade45a02034bf8d0ab"} Feb 18 01:53:33 crc kubenswrapper[4847]: I0218 01:53:33.417380 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2c244ed-05a5-4951-8304-fb45d3a8a55c" path="/var/lib/kubelet/pods/b2c244ed-05a5-4951-8304-fb45d3a8a55c/volumes" Feb 18 01:53:34 crc kubenswrapper[4847]: I0218 01:53:34.340349 4847 generic.go:334] "Generic (PLEG): container finished" podID="6ba90b9d-26a6-4184-b27b-303825add8a9" containerID="6375d39f78d40dbad1eaef9189ac27c687cdd6ac19709a896759e4a33b857aa3" exitCode=1 Feb 18 01:53:34 crc kubenswrapper[4847]: I0218 01:53:34.340439 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-69r5f/crc-debug-m2d7d" event={"ID":"6ba90b9d-26a6-4184-b27b-303825add8a9","Type":"ContainerDied","Data":"6375d39f78d40dbad1eaef9189ac27c687cdd6ac19709a896759e4a33b857aa3"} Feb 18 01:53:34 crc kubenswrapper[4847]: I0218 01:53:34.381106 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-69r5f/crc-debug-m2d7d"] Feb 18 01:53:34 crc kubenswrapper[4847]: I0218 01:53:34.390863 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-69r5f/crc-debug-m2d7d"] Feb 18 01:53:35 crc kubenswrapper[4847]: I0218 01:53:35.464457 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-69r5f/crc-debug-m2d7d" Feb 18 01:53:35 crc kubenswrapper[4847]: I0218 01:53:35.638573 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z6z5\" (UniqueName: \"kubernetes.io/projected/6ba90b9d-26a6-4184-b27b-303825add8a9-kube-api-access-8z6z5\") pod \"6ba90b9d-26a6-4184-b27b-303825add8a9\" (UID: \"6ba90b9d-26a6-4184-b27b-303825add8a9\") " Feb 18 01:53:35 crc kubenswrapper[4847]: I0218 01:53:35.638758 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ba90b9d-26a6-4184-b27b-303825add8a9-host\") pod \"6ba90b9d-26a6-4184-b27b-303825add8a9\" (UID: \"6ba90b9d-26a6-4184-b27b-303825add8a9\") " Feb 18 01:53:35 crc kubenswrapper[4847]: I0218 01:53:35.638890 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ba90b9d-26a6-4184-b27b-303825add8a9-host" (OuterVolumeSpecName: "host") pod "6ba90b9d-26a6-4184-b27b-303825add8a9" (UID: "6ba90b9d-26a6-4184-b27b-303825add8a9"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 18 01:53:35 crc kubenswrapper[4847]: I0218 01:53:35.639206 4847 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ba90b9d-26a6-4184-b27b-303825add8a9-host\") on node \"crc\" DevicePath \"\"" Feb 18 01:53:35 crc kubenswrapper[4847]: I0218 01:53:35.655776 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ba90b9d-26a6-4184-b27b-303825add8a9-kube-api-access-8z6z5" (OuterVolumeSpecName: "kube-api-access-8z6z5") pod "6ba90b9d-26a6-4184-b27b-303825add8a9" (UID: "6ba90b9d-26a6-4184-b27b-303825add8a9"). InnerVolumeSpecName "kube-api-access-8z6z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:53:35 crc kubenswrapper[4847]: I0218 01:53:35.740871 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z6z5\" (UniqueName: \"kubernetes.io/projected/6ba90b9d-26a6-4184-b27b-303825add8a9-kube-api-access-8z6z5\") on node \"crc\" DevicePath \"\"" Feb 18 01:53:36 crc kubenswrapper[4847]: I0218 01:53:36.359429 4847 scope.go:117] "RemoveContainer" containerID="6375d39f78d40dbad1eaef9189ac27c687cdd6ac19709a896759e4a33b857aa3" Feb 18 01:53:36 crc kubenswrapper[4847]: I0218 01:53:36.359477 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-69r5f/crc-debug-m2d7d" Feb 18 01:53:37 crc kubenswrapper[4847]: I0218 01:53:37.424861 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ba90b9d-26a6-4184-b27b-303825add8a9" path="/var/lib/kubelet/pods/6ba90b9d-26a6-4184-b27b-303825add8a9/volumes" Feb 18 01:53:39 crc kubenswrapper[4847]: I0218 01:53:39.815738 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jgtdv" Feb 18 01:53:39 crc kubenswrapper[4847]: I0218 01:53:39.874472 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jgtdv" Feb 18 01:53:39 crc kubenswrapper[4847]: I0218 01:53:39.944150 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jgtdv"] Feb 18 01:53:40 crc kubenswrapper[4847]: I0218 01:53:40.001692 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7sccw" Feb 18 01:53:40 crc kubenswrapper[4847]: I0218 01:53:40.058795 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pw6fv"] Feb 18 01:53:40 crc kubenswrapper[4847]: I0218 01:53:40.059071 4847 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-marketplace/community-operators-pw6fv" podUID="6b94c451-a6b9-4649-a612-a39065b4e83c" containerName="registry-server" containerID="cri-o://916c16f515c925491723ea2faf51ec0f63fa990e9f57f0c15c884855b4a116c5" gracePeriod=2 Feb 18 01:53:40 crc kubenswrapper[4847]: I0218 01:53:40.064938 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7sccw" Feb 18 01:53:40 crc kubenswrapper[4847]: E0218 01:53:40.420101 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:53:40 crc kubenswrapper[4847]: I0218 01:53:40.435754 4847 generic.go:334] "Generic (PLEG): container finished" podID="6b94c451-a6b9-4649-a612-a39065b4e83c" containerID="916c16f515c925491723ea2faf51ec0f63fa990e9f57f0c15c884855b4a116c5" exitCode=0 Feb 18 01:53:40 crc kubenswrapper[4847]: I0218 01:53:40.436576 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw6fv" event={"ID":"6b94c451-a6b9-4649-a612-a39065b4e83c","Type":"ContainerDied","Data":"916c16f515c925491723ea2faf51ec0f63fa990e9f57f0c15c884855b4a116c5"} Feb 18 01:53:40 crc kubenswrapper[4847]: I0218 01:53:40.605462 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pw6fv" Feb 18 01:53:40 crc kubenswrapper[4847]: I0218 01:53:40.748785 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b94c451-a6b9-4649-a612-a39065b4e83c-utilities\") pod \"6b94c451-a6b9-4649-a612-a39065b4e83c\" (UID: \"6b94c451-a6b9-4649-a612-a39065b4e83c\") " Feb 18 01:53:40 crc kubenswrapper[4847]: I0218 01:53:40.749082 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zjw7\" (UniqueName: \"kubernetes.io/projected/6b94c451-a6b9-4649-a612-a39065b4e83c-kube-api-access-2zjw7\") pod \"6b94c451-a6b9-4649-a612-a39065b4e83c\" (UID: \"6b94c451-a6b9-4649-a612-a39065b4e83c\") " Feb 18 01:53:40 crc kubenswrapper[4847]: I0218 01:53:40.749287 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b94c451-a6b9-4649-a612-a39065b4e83c-catalog-content\") pod \"6b94c451-a6b9-4649-a612-a39065b4e83c\" (UID: \"6b94c451-a6b9-4649-a612-a39065b4e83c\") " Feb 18 01:53:40 crc kubenswrapper[4847]: I0218 01:53:40.749988 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b94c451-a6b9-4649-a612-a39065b4e83c-utilities" (OuterVolumeSpecName: "utilities") pod "6b94c451-a6b9-4649-a612-a39065b4e83c" (UID: "6b94c451-a6b9-4649-a612-a39065b4e83c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:53:40 crc kubenswrapper[4847]: I0218 01:53:40.751138 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b94c451-a6b9-4649-a612-a39065b4e83c-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:53:40 crc kubenswrapper[4847]: I0218 01:53:40.756584 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b94c451-a6b9-4649-a612-a39065b4e83c-kube-api-access-2zjw7" (OuterVolumeSpecName: "kube-api-access-2zjw7") pod "6b94c451-a6b9-4649-a612-a39065b4e83c" (UID: "6b94c451-a6b9-4649-a612-a39065b4e83c"). InnerVolumeSpecName "kube-api-access-2zjw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:53:40 crc kubenswrapper[4847]: I0218 01:53:40.793775 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b94c451-a6b9-4649-a612-a39065b4e83c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b94c451-a6b9-4649-a612-a39065b4e83c" (UID: "6b94c451-a6b9-4649-a612-a39065b4e83c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:53:40 crc kubenswrapper[4847]: I0218 01:53:40.854056 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b94c451-a6b9-4649-a612-a39065b4e83c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:53:40 crc kubenswrapper[4847]: I0218 01:53:40.854092 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zjw7\" (UniqueName: \"kubernetes.io/projected/6b94c451-a6b9-4649-a612-a39065b4e83c-kube-api-access-2zjw7\") on node \"crc\" DevicePath \"\"" Feb 18 01:53:41 crc kubenswrapper[4847]: I0218 01:53:41.445684 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw6fv" event={"ID":"6b94c451-a6b9-4649-a612-a39065b4e83c","Type":"ContainerDied","Data":"7844e7b374a3e73684bf73c46f79a399cf15d7575d1fa5073fe777a50d1e2921"} Feb 18 01:53:41 crc kubenswrapper[4847]: I0218 01:53:41.445732 4847 scope.go:117] "RemoveContainer" containerID="916c16f515c925491723ea2faf51ec0f63fa990e9f57f0c15c884855b4a116c5" Feb 18 01:53:41 crc kubenswrapper[4847]: I0218 01:53:41.445742 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pw6fv" Feb 18 01:53:41 crc kubenswrapper[4847]: I0218 01:53:41.475973 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pw6fv"] Feb 18 01:53:41 crc kubenswrapper[4847]: I0218 01:53:41.476136 4847 scope.go:117] "RemoveContainer" containerID="b52d45cde22006108e5e12e4180d97d7a8505837e056dfc5fd66d94a88340d97" Feb 18 01:53:41 crc kubenswrapper[4847]: I0218 01:53:41.493748 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pw6fv"] Feb 18 01:53:41 crc kubenswrapper[4847]: I0218 01:53:41.517983 4847 scope.go:117] "RemoveContainer" containerID="d3b4c6221acb6113a865ae952e84e43dedb965d974a175c5cfe3a0bd34efd0c3" Feb 18 01:53:42 crc kubenswrapper[4847]: I0218 01:53:42.252512 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7sccw"] Feb 18 01:53:42 crc kubenswrapper[4847]: I0218 01:53:42.252724 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7sccw" podUID="9737f87a-c79d-4f6f-9ab3-9b4772129b6e" containerName="registry-server" containerID="cri-o://a073258b111e53d468d02e65adefbfdb9bf2b7e7d03a857dfc05635f59f27be9" gracePeriod=2 Feb 18 01:53:42 crc kubenswrapper[4847]: I0218 01:53:42.458940 4847 generic.go:334] "Generic (PLEG): container finished" podID="9737f87a-c79d-4f6f-9ab3-9b4772129b6e" containerID="a073258b111e53d468d02e65adefbfdb9bf2b7e7d03a857dfc05635f59f27be9" exitCode=0 Feb 18 01:53:42 crc kubenswrapper[4847]: I0218 01:53:42.459016 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sccw" event={"ID":"9737f87a-c79d-4f6f-9ab3-9b4772129b6e","Type":"ContainerDied","Data":"a073258b111e53d468d02e65adefbfdb9bf2b7e7d03a857dfc05635f59f27be9"} Feb 18 01:53:42 crc kubenswrapper[4847]: I0218 01:53:42.828094 4847 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7sccw" Feb 18 01:53:42 crc kubenswrapper[4847]: I0218 01:53:42.998338 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9737f87a-c79d-4f6f-9ab3-9b4772129b6e-catalog-content\") pod \"9737f87a-c79d-4f6f-9ab3-9b4772129b6e\" (UID: \"9737f87a-c79d-4f6f-9ab3-9b4772129b6e\") " Feb 18 01:53:42 crc kubenswrapper[4847]: I0218 01:53:42.998505 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdff5\" (UniqueName: \"kubernetes.io/projected/9737f87a-c79d-4f6f-9ab3-9b4772129b6e-kube-api-access-gdff5\") pod \"9737f87a-c79d-4f6f-9ab3-9b4772129b6e\" (UID: \"9737f87a-c79d-4f6f-9ab3-9b4772129b6e\") " Feb 18 01:53:42 crc kubenswrapper[4847]: I0218 01:53:42.998548 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9737f87a-c79d-4f6f-9ab3-9b4772129b6e-utilities\") pod \"9737f87a-c79d-4f6f-9ab3-9b4772129b6e\" (UID: \"9737f87a-c79d-4f6f-9ab3-9b4772129b6e\") " Feb 18 01:53:42 crc kubenswrapper[4847]: I0218 01:53:42.999762 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9737f87a-c79d-4f6f-9ab3-9b4772129b6e-utilities" (OuterVolumeSpecName: "utilities") pod "9737f87a-c79d-4f6f-9ab3-9b4772129b6e" (UID: "9737f87a-c79d-4f6f-9ab3-9b4772129b6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:53:43 crc kubenswrapper[4847]: I0218 01:53:43.005397 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9737f87a-c79d-4f6f-9ab3-9b4772129b6e-kube-api-access-gdff5" (OuterVolumeSpecName: "kube-api-access-gdff5") pod "9737f87a-c79d-4f6f-9ab3-9b4772129b6e" (UID: "9737f87a-c79d-4f6f-9ab3-9b4772129b6e"). 
InnerVolumeSpecName "kube-api-access-gdff5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 01:53:43 crc kubenswrapper[4847]: I0218 01:53:43.056726 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9737f87a-c79d-4f6f-9ab3-9b4772129b6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9737f87a-c79d-4f6f-9ab3-9b4772129b6e" (UID: "9737f87a-c79d-4f6f-9ab3-9b4772129b6e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 01:53:43 crc kubenswrapper[4847]: I0218 01:53:43.100913 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9737f87a-c79d-4f6f-9ab3-9b4772129b6e-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 01:53:43 crc kubenswrapper[4847]: I0218 01:53:43.100958 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9737f87a-c79d-4f6f-9ab3-9b4772129b6e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 01:53:43 crc kubenswrapper[4847]: I0218 01:53:43.100972 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdff5\" (UniqueName: \"kubernetes.io/projected/9737f87a-c79d-4f6f-9ab3-9b4772129b6e-kube-api-access-gdff5\") on node \"crc\" DevicePath \"\"" Feb 18 01:53:43 crc kubenswrapper[4847]: E0218 01:53:43.407291 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:53:43 crc kubenswrapper[4847]: I0218 01:53:43.415129 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b94c451-a6b9-4649-a612-a39065b4e83c" 
path="/var/lib/kubelet/pods/6b94c451-a6b9-4649-a612-a39065b4e83c/volumes" Feb 18 01:53:43 crc kubenswrapper[4847]: I0218 01:53:43.469851 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sccw" event={"ID":"9737f87a-c79d-4f6f-9ab3-9b4772129b6e","Type":"ContainerDied","Data":"84022778fc12b04c231df134483a7ff88d80e6e6436248ae4a7b652507908e02"} Feb 18 01:53:43 crc kubenswrapper[4847]: I0218 01:53:43.469906 4847 scope.go:117] "RemoveContainer" containerID="a073258b111e53d468d02e65adefbfdb9bf2b7e7d03a857dfc05635f59f27be9" Feb 18 01:53:43 crc kubenswrapper[4847]: I0218 01:53:43.469915 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7sccw" Feb 18 01:53:43 crc kubenswrapper[4847]: I0218 01:53:43.491132 4847 scope.go:117] "RemoveContainer" containerID="e9b82726174252ef52a4e85852c4dae61bcc2cdbcd62bc4cb91c5323eab0cf4f" Feb 18 01:53:43 crc kubenswrapper[4847]: I0218 01:53:43.493591 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7sccw"] Feb 18 01:53:43 crc kubenswrapper[4847]: I0218 01:53:43.503423 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7sccw"] Feb 18 01:53:43 crc kubenswrapper[4847]: I0218 01:53:43.514140 4847 scope.go:117] "RemoveContainer" containerID="03feefa14eb1ed184c950687b841b1141fbdc355c498528ef21f02c762bfa5d9" Feb 18 01:53:45 crc kubenswrapper[4847]: I0218 01:53:45.419516 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9737f87a-c79d-4f6f-9ab3-9b4772129b6e" path="/var/lib/kubelet/pods/9737f87a-c79d-4f6f-9ab3-9b4772129b6e/volumes" Feb 18 01:53:55 crc kubenswrapper[4847]: E0218 01:53:55.405691 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:53:55 crc kubenswrapper[4847]: E0218 01:53:55.405812 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:54:06 crc kubenswrapper[4847]: E0218 01:54:06.533422 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:54:06 crc kubenswrapper[4847]: E0218 01:54:06.534054 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:54:06 crc kubenswrapper[4847]: E0218 01:54:06.534518 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9fhbdh67dh64h568hch65dhb8h67chf7h66dhfh585h565h85h554h68ch8bh56ch677h57bhb5h544h575hd7h5f8hc7h66dh669h57bh589h56cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:54:06 crc kubenswrapper[4847]: E0218 01:54:06.535826 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:54:09 crc kubenswrapper[4847]: E0218 01:54:09.406142 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:54:18 crc kubenswrapper[4847]: E0218 01:54:18.405844 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:54:24 crc kubenswrapper[4847]: E0218 01:54:24.407400 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:54:30 crc kubenswrapper[4847]: E0218 01:54:30.407426 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:54:35 crc kubenswrapper[4847]: E0218 01:54:35.408653 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:54:38 crc kubenswrapper[4847]: I0218 01:54:38.603221 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_312a9ba8-6259-4db9-b9e3-9d6b7912c6ba/aodh-api/0.log" Feb 18 01:54:38 crc kubenswrapper[4847]: I0218 01:54:38.763485 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_312a9ba8-6259-4db9-b9e3-9d6b7912c6ba/aodh-notifier/0.log" Feb 18 01:54:38 crc kubenswrapper[4847]: I0218 01:54:38.791123 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_312a9ba8-6259-4db9-b9e3-9d6b7912c6ba/aodh-listener/0.log" Feb 18 01:54:38 crc kubenswrapper[4847]: I0218 01:54:38.817266 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_312a9ba8-6259-4db9-b9e3-9d6b7912c6ba/aodh-evaluator/0.log" Feb 18 01:54:39 crc kubenswrapper[4847]: I0218 01:54:39.699673 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6db955874-66wrk_594db61c-0bfb-44cf-be11-cae6758e9fac/barbican-api/0.log" Feb 18 01:54:39 crc kubenswrapper[4847]: I0218 01:54:39.704313 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6db955874-66wrk_594db61c-0bfb-44cf-be11-cae6758e9fac/barbican-api-log/0.log" Feb 18 01:54:39 crc kubenswrapper[4847]: I0218 01:54:39.760084 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-64755b45d-nv688_8f21e33b-cde8-4278-927c-b9566864f208/barbican-keystone-listener/0.log" Feb 18 01:54:39 crc kubenswrapper[4847]: I0218 01:54:39.885704 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-64755b45d-nv688_8f21e33b-cde8-4278-927c-b9566864f208/barbican-keystone-listener-log/0.log" Feb 18 01:54:39 crc kubenswrapper[4847]: I0218 01:54:39.942684 
4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-ddd9775f-8wm5n_be03f8a3-0db4-45c7-90d9-6911a23b39c9/barbican-worker/0.log" Feb 18 01:54:39 crc kubenswrapper[4847]: I0218 01:54:39.947146 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-ddd9775f-8wm5n_be03f8a3-0db4-45c7-90d9-6911a23b39c9/barbican-worker-log/0.log" Feb 18 01:54:40 crc kubenswrapper[4847]: I0218 01:54:40.135258 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-6bfnl_7c608e56-c3b4-4a23-ac5e-2994862ffea6/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:54:40 crc kubenswrapper[4847]: I0218 01:54:40.344409 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6/sg-core/0.log" Feb 18 01:54:40 crc kubenswrapper[4847]: I0218 01:54:40.373477 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6/ceilometer-notification-agent/0.log" Feb 18 01:54:40 crc kubenswrapper[4847]: I0218 01:54:40.392836 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6/proxy-httpd/0.log" Feb 18 01:54:40 crc kubenswrapper[4847]: I0218 01:54:40.513702 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hfqz8_e23c4cee-99f1-44ba-8070-565c0a433487/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:54:40 crc kubenswrapper[4847]: I0218 01:54:40.623095 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_6f1d368a-d1df-4e38-b82a-7cd8911050cc/cinder-api/0.log" Feb 18 01:54:40 crc kubenswrapper[4847]: I0218 01:54:40.674456 4847 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_6f1d368a-d1df-4e38-b82a-7cd8911050cc/cinder-api-log/0.log" Feb 18 01:54:41 crc kubenswrapper[4847]: E0218 01:54:41.416005 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:54:41 crc kubenswrapper[4847]: I0218 01:54:41.689443 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-l8b6z_595b7464-bb09-48f6-ae94-96bc8ed4cd16/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:54:41 crc kubenswrapper[4847]: I0218 01:54:41.765147 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4837a634-0109-4735-80ad-a9cf74966812/cinder-scheduler/0.log" Feb 18 01:54:41 crc kubenswrapper[4847]: I0218 01:54:41.770458 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4837a634-0109-4735-80ad-a9cf74966812/probe/0.log" Feb 18 01:54:41 crc kubenswrapper[4847]: I0218 01:54:41.913707 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-6hhm7_d45f5c66-8268-498f-8c61-4c6c33cc1c28/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:54:42 crc kubenswrapper[4847]: I0218 01:54:42.014941 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5cf7b6cbf7-zktfb_644fa6a1-3d08-4fad-a252-7f1364d0b56e/init/0.log" Feb 18 01:54:42 crc kubenswrapper[4847]: I0218 01:54:42.266558 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5cf7b6cbf7-zktfb_644fa6a1-3d08-4fad-a252-7f1364d0b56e/init/0.log" Feb 18 01:54:42 crc kubenswrapper[4847]: I0218 
01:54:42.324477 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5cf7b6cbf7-zktfb_644fa6a1-3d08-4fad-a252-7f1364d0b56e/dnsmasq-dns/0.log" Feb 18 01:54:42 crc kubenswrapper[4847]: I0218 01:54:42.517585 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-75d56c557b-p6pn6_1c3bf17f-a65b-4daf-82d0-f43dfa8c0f21/heat-api/0.log" Feb 18 01:54:42 crc kubenswrapper[4847]: I0218 01:54:42.864817 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-67b9f7bd8b-phnps_67a5eed6-fda8-4fca-bd98-6bcb2270d646/heat-engine/0.log" Feb 18 01:54:42 crc kubenswrapper[4847]: I0218 01:54:42.957109 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-5fd77b47d6-ms5hf_724e605e-6796-4384-8832-ab9bcec6a585/heat-cfnapi/0.log" Feb 18 01:54:43 crc kubenswrapper[4847]: I0218 01:54:43.057001 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-gdvd5_ecbd2804-8c74-4962-ad9c-48f261845f8c/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:54:43 crc kubenswrapper[4847]: I0218 01:54:43.063213 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-df98v_2796ae55-da7e-484c-a3fa-789aabef230d/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:54:43 crc kubenswrapper[4847]: I0218 01:54:43.274422 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-747f4858ff-m9tz2_5950d31e-b5dd-43e7-accb-570faedeb30a/keystone-api/0.log" Feb 18 01:54:43 crc kubenswrapper[4847]: I0218 01:54:43.278891 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29522941-rppqb_55b8d659-c976-4095-baab-c6452d321fe2/keystone-cron/0.log" Feb 18 01:54:43 crc kubenswrapper[4847]: I0218 01:54:43.407592 4847 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_kube-state-metrics-0_f660a69e-33ac-40d0-93f8-68f496ea44f3/kube-state-metrics/0.log" Feb 18 01:54:43 crc kubenswrapper[4847]: I0218 01:54:43.470439 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-fzz4f_d652d579-9c58-4f69-bdeb-d9ebc4a7ec9d/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:54:43 crc kubenswrapper[4847]: I0218 01:54:43.661022 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_551ec97c-df77-4223-abff-f7d7eb766736/mysqld-exporter/0.log" Feb 18 01:54:43 crc kubenswrapper[4847]: I0218 01:54:43.780752 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bf6d8bf75-gfz9n_660f92ea-ca1f-410b-b9f2-d42b2343e1d3/neutron-api/0.log" Feb 18 01:54:43 crc kubenswrapper[4847]: I0218 01:54:43.892243 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bf6d8bf75-gfz9n_660f92ea-ca1f-410b-b9f2-d42b2343e1d3/neutron-httpd/0.log" Feb 18 01:54:44 crc kubenswrapper[4847]: I0218 01:54:44.177937 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6fc0b03b-36f3-47d5-bdce-65a09774bf93/nova-api-log/0.log" Feb 18 01:54:44 crc kubenswrapper[4847]: I0218 01:54:44.221911 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_4ca6c74e-5f00-416d-aa49-5132671a351a/nova-cell0-conductor-conductor/0.log" Feb 18 01:54:44 crc kubenswrapper[4847]: I0218 01:54:44.504859 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6fc0b03b-36f3-47d5-bdce-65a09774bf93/nova-api-api/0.log" Feb 18 01:54:44 crc kubenswrapper[4847]: I0218 01:54:44.514015 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_9e330065-0783-4200-8af0-e726b820aa6d/nova-cell1-conductor-conductor/0.log" Feb 18 01:54:44 crc kubenswrapper[4847]: I0218 01:54:44.762747 4847 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_66c0dd63-bd8e-44ce-bd9a-edd421f59682/nova-cell1-novncproxy-novncproxy/0.log" Feb 18 01:54:44 crc kubenswrapper[4847]: I0218 01:54:44.890006 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d7d31ecb-9f5f-42bf-be6a-9e97c594247a/nova-metadata-log/0.log" Feb 18 01:54:45 crc kubenswrapper[4847]: I0218 01:54:45.055678 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_38e79629-56ea-4262-875c-8dd1efdbd88f/nova-scheduler-scheduler/0.log" Feb 18 01:54:45 crc kubenswrapper[4847]: I0218 01:54:45.200893 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_109c4d3d-c276-45ed-93d2-d1414e156fb9/mysql-bootstrap/0.log" Feb 18 01:54:45 crc kubenswrapper[4847]: I0218 01:54:45.581643 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_109c4d3d-c276-45ed-93d2-d1414e156fb9/mysql-bootstrap/0.log" Feb 18 01:54:45 crc kubenswrapper[4847]: I0218 01:54:45.679583 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_109c4d3d-c276-45ed-93d2-d1414e156fb9/galera/0.log" Feb 18 01:54:45 crc kubenswrapper[4847]: I0218 01:54:45.777758 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72/mysql-bootstrap/0.log" Feb 18 01:54:45 crc kubenswrapper[4847]: I0218 01:54:45.945840 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72/mysql-bootstrap/0.log" Feb 18 01:54:46 crc kubenswrapper[4847]: I0218 01:54:46.006623 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_5c4503c2-3a56-4dfb-9d53-8ecc2a3f0c72/galera/0.log" Feb 18 01:54:46 crc kubenswrapper[4847]: I0218 01:54:46.151327 4847 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstackclient_4f55a480-6f28-47f9-aa62-f21de18ff60e/openstackclient/0.log" Feb 18 01:54:46 crc kubenswrapper[4847]: I0218 01:54:46.215940 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-t5ftr_a9267970-665c-43c5-be4c-1cd26b39ad2d/openstack-network-exporter/0.log" Feb 18 01:54:46 crc kubenswrapper[4847]: I0218 01:54:46.461688 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-h5k8p_b233fc1e-4730-4c0c-bf0d-741bf86d3a19/ovsdb-server-init/0.log" Feb 18 01:54:46 crc kubenswrapper[4847]: I0218 01:54:46.503522 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d7d31ecb-9f5f-42bf-be6a-9e97c594247a/nova-metadata-metadata/0.log" Feb 18 01:54:46 crc kubenswrapper[4847]: I0218 01:54:46.653697 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-h5k8p_b233fc1e-4730-4c0c-bf0d-741bf86d3a19/ovsdb-server-init/0.log" Feb 18 01:54:46 crc kubenswrapper[4847]: I0218 01:54:46.655978 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-h5k8p_b233fc1e-4730-4c0c-bf0d-741bf86d3a19/ovs-vswitchd/0.log" Feb 18 01:54:46 crc kubenswrapper[4847]: I0218 01:54:46.671732 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-h5k8p_b233fc1e-4730-4c0c-bf0d-741bf86d3a19/ovsdb-server/0.log" Feb 18 01:54:46 crc kubenswrapper[4847]: I0218 01:54:46.851480 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-xh6ft_2801a17e-6108-4ffe-9eac-7068b93707e1/ovn-controller/0.log" Feb 18 01:54:46 crc kubenswrapper[4847]: I0218 01:54:46.875804 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-d9z9z_0429dd21-328a-4aed-9e67-f008635b6127/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:54:47 crc kubenswrapper[4847]: I0218 01:54:47.120053 
4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_fcef7123-9c18-4431-b436-e6c6e6881f5a/ovn-northd/0.log" Feb 18 01:54:47 crc kubenswrapper[4847]: I0218 01:54:47.136854 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_fcef7123-9c18-4431-b436-e6c6e6881f5a/openstack-network-exporter/0.log" Feb 18 01:54:47 crc kubenswrapper[4847]: I0218 01:54:47.288677 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b50a5134-eeac-410c-8f07-b6a4c141386e/openstack-network-exporter/0.log" Feb 18 01:54:47 crc kubenswrapper[4847]: I0218 01:54:47.347810 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_b50a5134-eeac-410c-8f07-b6a4c141386e/ovsdbserver-nb/0.log" Feb 18 01:54:47 crc kubenswrapper[4847]: I0218 01:54:47.348518 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_cbdd48eb-2162-4fc5-9d56-3e58835ac6bc/openstack-network-exporter/0.log" Feb 18 01:54:47 crc kubenswrapper[4847]: I0218 01:54:47.497747 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_cbdd48eb-2162-4fc5-9d56-3e58835ac6bc/ovsdbserver-sb/0.log" Feb 18 01:54:47 crc kubenswrapper[4847]: I0218 01:54:47.648119 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-f4b564c84-4zd7z_31f442fe-cea0-4d0f-a39d-75b8648fbc3d/placement-api/0.log" Feb 18 01:54:47 crc kubenswrapper[4847]: I0218 01:54:47.720297 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-f4b564c84-4zd7z_31f442fe-cea0-4d0f-a39d-75b8648fbc3d/placement-log/0.log" Feb 18 01:54:47 crc kubenswrapper[4847]: I0218 01:54:47.821332 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f622e85f-b79e-4abb-aa5d-bb51ca59d1ae/init-config-reloader/0.log" Feb 18 01:54:49 crc kubenswrapper[4847]: I0218 01:54:49.258238 4847 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f622e85f-b79e-4abb-aa5d-bb51ca59d1ae/thanos-sidecar/0.log" Feb 18 01:54:49 crc kubenswrapper[4847]: I0218 01:54:49.279311 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f622e85f-b79e-4abb-aa5d-bb51ca59d1ae/init-config-reloader/0.log" Feb 18 01:54:49 crc kubenswrapper[4847]: I0218 01:54:49.283809 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f622e85f-b79e-4abb-aa5d-bb51ca59d1ae/config-reloader/0.log" Feb 18 01:54:49 crc kubenswrapper[4847]: I0218 01:54:49.288050 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_f622e85f-b79e-4abb-aa5d-bb51ca59d1ae/prometheus/0.log" Feb 18 01:54:49 crc kubenswrapper[4847]: I0218 01:54:49.506924 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_fac01d88-c41a-44cd-97e2-34d58a619ba1/setup-container/0.log" Feb 18 01:54:49 crc kubenswrapper[4847]: I0218 01:54:49.678791 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_fac01d88-c41a-44cd-97e2-34d58a619ba1/setup-container/0.log" Feb 18 01:54:49 crc kubenswrapper[4847]: I0218 01:54:49.692948 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_fac01d88-c41a-44cd-97e2-34d58a619ba1/rabbitmq/0.log" Feb 18 01:54:49 crc kubenswrapper[4847]: I0218 01:54:49.809415 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b19ac705-a85b-44ee-86c9-c31b23d988c0/setup-container/0.log" Feb 18 01:54:50 crc kubenswrapper[4847]: I0218 01:54:50.003849 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b19ac705-a85b-44ee-86c9-c31b23d988c0/setup-container/0.log" Feb 18 01:54:50 crc kubenswrapper[4847]: I0218 01:54:50.088951 4847 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-tw6tm_f33724ce-fdec-4a31-8d15-f39244f2392e/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:54:50 crc kubenswrapper[4847]: I0218 01:54:50.098788 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b19ac705-a85b-44ee-86c9-c31b23d988c0/rabbitmq/0.log" Feb 18 01:54:50 crc kubenswrapper[4847]: E0218 01:54:50.406325 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:54:51 crc kubenswrapper[4847]: I0218 01:54:51.155933 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-7wptt_2a653876-94ca-4328-825b-abca7b86ea33/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:54:51 crc kubenswrapper[4847]: I0218 01:54:51.271474 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-8bqxp_3d946c96-5bd1-4a59-b58c-eedf4b3bc460/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:54:51 crc kubenswrapper[4847]: I0218 01:54:51.531777 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-jtqrj_6c2669b9-bb6f-484b-9a1b-70c6903244c5/ssh-known-hosts-edpm-deployment/0.log" Feb 18 01:54:51 crc kubenswrapper[4847]: I0218 01:54:51.722505 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7df4cf8969-f69sk_30cfe0d1-2602-42ae-b1b3-3f4e562c13c6/proxy-server/0.log" Feb 18 01:54:51 crc kubenswrapper[4847]: I0218 01:54:51.778123 4847 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-proxy-7df4cf8969-f69sk_30cfe0d1-2602-42ae-b1b3-3f4e562c13c6/proxy-httpd/0.log" Feb 18 01:54:51 crc kubenswrapper[4847]: I0218 01:54:51.817626 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-8rvhw_863d851c-3284-47db-8c80-d5d10f8c2b5c/swift-ring-rebalance/0.log" Feb 18 01:54:51 crc kubenswrapper[4847]: I0218 01:54:51.985106 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_623045fa-a3f1-4ad5-a5f7-361f31303bfb/account-auditor/0.log" Feb 18 01:54:52 crc kubenswrapper[4847]: I0218 01:54:52.032515 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_623045fa-a3f1-4ad5-a5f7-361f31303bfb/account-reaper/0.log" Feb 18 01:54:52 crc kubenswrapper[4847]: I0218 01:54:52.054826 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_623045fa-a3f1-4ad5-a5f7-361f31303bfb/account-replicator/0.log" Feb 18 01:54:52 crc kubenswrapper[4847]: I0218 01:54:52.201586 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_623045fa-a3f1-4ad5-a5f7-361f31303bfb/account-server/0.log" Feb 18 01:54:52 crc kubenswrapper[4847]: I0218 01:54:52.482462 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_623045fa-a3f1-4ad5-a5f7-361f31303bfb/container-auditor/0.log" Feb 18 01:54:52 crc kubenswrapper[4847]: I0218 01:54:52.519316 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_623045fa-a3f1-4ad5-a5f7-361f31303bfb/container-replicator/0.log" Feb 18 01:54:52 crc kubenswrapper[4847]: I0218 01:54:52.535309 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_623045fa-a3f1-4ad5-a5f7-361f31303bfb/container-server/0.log" Feb 18 01:54:52 crc kubenswrapper[4847]: I0218 01:54:52.588917 4847 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_623045fa-a3f1-4ad5-a5f7-361f31303bfb/container-updater/0.log" Feb 18 01:54:52 crc kubenswrapper[4847]: I0218 01:54:52.725815 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_623045fa-a3f1-4ad5-a5f7-361f31303bfb/object-auditor/0.log" Feb 18 01:54:52 crc kubenswrapper[4847]: I0218 01:54:52.733520 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_623045fa-a3f1-4ad5-a5f7-361f31303bfb/object-expirer/0.log" Feb 18 01:54:52 crc kubenswrapper[4847]: I0218 01:54:52.784269 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_623045fa-a3f1-4ad5-a5f7-361f31303bfb/object-replicator/0.log" Feb 18 01:54:52 crc kubenswrapper[4847]: I0218 01:54:52.803759 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_623045fa-a3f1-4ad5-a5f7-361f31303bfb/object-server/0.log" Feb 18 01:54:52 crc kubenswrapper[4847]: I0218 01:54:52.937777 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_623045fa-a3f1-4ad5-a5f7-361f31303bfb/rsync/0.log" Feb 18 01:54:52 crc kubenswrapper[4847]: I0218 01:54:52.965873 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_623045fa-a3f1-4ad5-a5f7-361f31303bfb/object-updater/0.log" Feb 18 01:54:53 crc kubenswrapper[4847]: I0218 01:54:53.002062 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_623045fa-a3f1-4ad5-a5f7-361f31303bfb/swift-recon-cron/0.log" Feb 18 01:54:53 crc kubenswrapper[4847]: I0218 01:54:53.166846 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-4rt5z_eaf6af26-8056-47b5-9732-a0fc0f4680d6/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:54:53 crc kubenswrapper[4847]: I0218 01:54:53.245740 4847 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-75hsd_27e2b216-c7e0-48cb-8fbe-1b286c6ca6c9/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:54:53 crc kubenswrapper[4847]: I0218 01:54:53.404759 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-h68tz_e6120fb7-f119-4597-86d5-8c75dcffac32/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:54:53 crc kubenswrapper[4847]: I0218 01:54:53.566352 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-h7hd4_9781efbc-c7ea-4f51-9fd4-2c1ed023e5b0/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:54:53 crc kubenswrapper[4847]: I0218 01:54:53.686741 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-mzrks_a8cdefc7-b3d5-4ef5-a08b-611fed8486b1/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:54:53 crc kubenswrapper[4847]: I0218 01:54:53.850713 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-pph69_8b3ece2f-0bb7-4404-b500-5da0aa7aea40/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:54:53 crc kubenswrapper[4847]: I0218 01:54:53.956828 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-s7zz9_4b7fee7e-01b8-4a92-b928-c9bc9b0d9df4/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:54:54 crc kubenswrapper[4847]: I0218 01:54:54.059146 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-ffl2s_6c781aeb-a3ac-4a08-a055-ed2846466b8b/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 18 01:54:55 crc kubenswrapper[4847]: I0218 01:54:55.773327 4847 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_085dadd3-8aae-4c94-84e4-6289f1e537e1/memcached/0.log" Feb 18 01:54:56 crc kubenswrapper[4847]: E0218 01:54:56.406680 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:55:03 crc kubenswrapper[4847]: E0218 01:55:03.406441 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:55:07 crc kubenswrapper[4847]: E0218 01:55:07.416715 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:55:18 crc kubenswrapper[4847]: E0218 01:55:18.407143 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:55:22 crc kubenswrapper[4847]: E0218 01:55:22.407148 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:55:23 crc kubenswrapper[4847]: I0218 01:55:23.713812 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5_6453624a-f0d1-4831-a2ca-749f87f88542/util/0.log" Feb 18 01:55:23 crc kubenswrapper[4847]: I0218 01:55:23.899280 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5_6453624a-f0d1-4831-a2ca-749f87f88542/util/0.log" Feb 18 01:55:23 crc kubenswrapper[4847]: I0218 01:55:23.904418 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5_6453624a-f0d1-4831-a2ca-749f87f88542/pull/0.log" Feb 18 01:55:23 crc kubenswrapper[4847]: I0218 01:55:23.936174 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5_6453624a-f0d1-4831-a2ca-749f87f88542/pull/0.log" Feb 18 01:55:24 crc kubenswrapper[4847]: I0218 01:55:24.106371 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5_6453624a-f0d1-4831-a2ca-749f87f88542/util/0.log" Feb 18 01:55:24 crc kubenswrapper[4847]: I0218 01:55:24.115732 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5_6453624a-f0d1-4831-a2ca-749f87f88542/extract/0.log" Feb 18 01:55:24 crc kubenswrapper[4847]: I0218 01:55:24.120158 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b1481b606e71b864a5cf7f688d735a19fa7280653d541c1e9bc6110ecbw4tg5_6453624a-f0d1-4831-a2ca-749f87f88542/pull/0.log" 
Feb 18 01:55:24 crc kubenswrapper[4847]: I0218 01:55:24.557093 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-bflmk_6ccdc5e8-7582-4ea6-89f7-b30e7c96ba33/manager/0.log" Feb 18 01:55:24 crc kubenswrapper[4847]: I0218 01:55:24.799935 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-7vvgv_0e56e3bc-f5fe-4d91-9cb4-bc22b59fd9eb/manager/0.log" Feb 18 01:55:25 crc kubenswrapper[4847]: I0218 01:55:25.246580 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-xnrms_a93522f3-c6ff-46fb-ab96-0af205914e2f/manager/0.log" Feb 18 01:55:25 crc kubenswrapper[4847]: I0218 01:55:25.289063 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-t5sg8_a7705c91-5ed6-4a64-b9a1-06af4d223613/manager/0.log" Feb 18 01:55:26 crc kubenswrapper[4847]: I0218 01:55:26.021645 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-4g2zb_22395d35-6b40-4f53-b3ca-dced6abd4b13/manager/0.log" Feb 18 01:55:26 crc kubenswrapper[4847]: I0218 01:55:26.451501 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-tzkvx_bfdfed12-2cd6-4adc-b953-83d17460c270/manager/0.log" Feb 18 01:55:26 crc kubenswrapper[4847]: I0218 01:55:26.781434 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-68zsz_20119aa4-b1ef-4ac7-9b93-af64593b22b3/manager/0.log" Feb 18 01:55:26 crc kubenswrapper[4847]: I0218 01:55:26.945891 4847 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-9b7bk_eb9dda88-61d8-471e-8f59-1f6918e048d0/manager/0.log" Feb 18 01:55:27 crc kubenswrapper[4847]: I0218 01:55:27.183058 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-8gg4t_a36027cb-b3fc-45b7-bcef-75e9b7743594/manager/0.log" Feb 18 01:55:27 crc kubenswrapper[4847]: I0218 01:55:27.202044 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-njjgx_5080d582-df48-411d-ae00-57bb214b3fb1/manager/0.log" Feb 18 01:55:27 crc kubenswrapper[4847]: I0218 01:55:27.316960 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-zft7w_67706a3a-2985-42f6-9820-21bf9abc77fc/manager/0.log" Feb 18 01:55:28 crc kubenswrapper[4847]: I0218 01:55:28.264174 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-l2sl6_82cf79bd-1bb2-4c3d-81e5-123ba2cfae5e/manager/0.log" Feb 18 01:55:28 crc kubenswrapper[4847]: I0218 01:55:28.386987 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cttqwl_96061780-bc78-49b0-b23d-2118927130c4/manager/0.log" Feb 18 01:55:28 crc kubenswrapper[4847]: I0218 01:55:28.705612 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-65cd6ddc4f-mqptb_5c8009fe-0ea5-4bd5-a152-73dff9f00145/operator/0.log" Feb 18 01:55:28 crc kubenswrapper[4847]: I0218 01:55:28.909625 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-s6d6x_57755abc-d7e9-479b-812c-6ddacee7d1be/registry-server/0.log" Feb 18 01:55:29 crc kubenswrapper[4847]: I0218 01:55:29.210660 4847 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-cpzb6_c63bde24-5850-4ef7-abba-00b22064d1c7/manager/0.log" Feb 18 01:55:29 crc kubenswrapper[4847]: I0218 01:55:29.448376 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-q7phq_eef8e54a-fdcd-4e1a-a56f-e2b8b4627c02/manager/0.log" Feb 18 01:55:29 crc kubenswrapper[4847]: I0218 01:55:29.833486 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-z9kpc_594a9f71-f227-40eb-89ab-a9f661a63e3a/operator/0.log" Feb 18 01:55:30 crc kubenswrapper[4847]: I0218 01:55:30.079836 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-fj256_8fcd7de2-15f8-4d01-8535-296fb3d8de65/manager/0.log" Feb 18 01:55:30 crc kubenswrapper[4847]: I0218 01:55:30.537059 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-867dw_5aee4f12-aa12-4168-bc5b-ad6408c5e8d8/manager/0.log" Feb 18 01:55:30 crc kubenswrapper[4847]: I0218 01:55:30.866263 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-77b97c6f8f-pcgng_2726117a-e40a-4a65-b290-404c27c71101/manager/0.log" Feb 18 01:55:30 crc kubenswrapper[4847]: I0218 01:55:30.942153 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-xttb8_5cb3848f-23f4-4037-876f-e390daafc3ba/manager/0.log" Feb 18 01:55:31 crc kubenswrapper[4847]: I0218 01:55:31.029877 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6994859df4-mcksc_6bb1820a-9449-4f74-8523-ee747951291d/manager/0.log" Feb 18 01:55:31 crc kubenswrapper[4847]: I0218 01:55:31.397704 4847 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-4x7fq_082203f6-e5fd-4dd3-8b94-2a46247155d9/manager/0.log" Feb 18 01:55:31 crc kubenswrapper[4847]: E0218 01:55:31.406649 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:55:35 crc kubenswrapper[4847]: E0218 01:55:35.405623 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:55:36 crc kubenswrapper[4847]: I0218 01:55:36.704820 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-k8v2d_c0bb6956-fedb-40ce-9d87-3fa43b468103/manager/0.log" Feb 18 01:55:44 crc kubenswrapper[4847]: E0218 01:55:44.408008 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:55:48 crc kubenswrapper[4847]: E0218 01:55:48.407673 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:55:53 crc kubenswrapper[4847]: I0218 01:55:53.492353 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:55:53 crc kubenswrapper[4847]: I0218 01:55:53.493015 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:55:54 crc kubenswrapper[4847]: I0218 01:55:54.843864 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qtzlv_4e454a89-9fab-4b19-9a33-7089da87f5a0/control-plane-machine-set-operator/0.log" Feb 18 01:55:55 crc kubenswrapper[4847]: I0218 01:55:55.062427 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pz8zw_44555695-834e-4ffc-bee2-b16d7adf6fbc/kube-rbac-proxy/0.log" Feb 18 01:55:55 crc kubenswrapper[4847]: I0218 01:55:55.101107 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pz8zw_44555695-834e-4ffc-bee2-b16d7adf6fbc/machine-api-operator/0.log" Feb 18 01:55:59 crc kubenswrapper[4847]: E0218 01:55:59.407911 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:56:01 crc 
kubenswrapper[4847]: E0218 01:56:01.406187 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:56:11 crc kubenswrapper[4847]: I0218 01:56:11.346879 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-7tjhn_7c1b21d7-11d3-4f97-aee8-d17dbeec7dbd/cert-manager-controller/0.log" Feb 18 01:56:11 crc kubenswrapper[4847]: I0218 01:56:11.508373 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-wrhbw_7d40c331-d27a-4d9f-910d-3c11700f264b/cert-manager-webhook/0.log" Feb 18 01:56:11 crc kubenswrapper[4847]: I0218 01:56:11.508851 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-7gsvp_3280aa1e-4dd8-438a-81c4-a07a1b7080db/cert-manager-cainjector/0.log" Feb 18 01:56:13 crc kubenswrapper[4847]: E0218 01:56:13.427791 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:56:15 crc kubenswrapper[4847]: E0218 01:56:15.411185 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:56:23 crc kubenswrapper[4847]: I0218 01:56:23.492049 4847 
patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:56:23 crc kubenswrapper[4847]: I0218 01:56:23.493721 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:56:24 crc kubenswrapper[4847]: E0218 01:56:24.407297 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:56:26 crc kubenswrapper[4847]: E0218 01:56:26.406307 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:56:27 crc kubenswrapper[4847]: I0218 01:56:27.895113 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-vjmwl_aeb7db89-c5aa-4675-aa0c-f4f6a34b109b/nmstate-console-plugin/0.log" Feb 18 01:56:27 crc kubenswrapper[4847]: I0218 01:56:27.943113 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-sffsn_818015c7-8c32-4aff-9723-67548354380b/nmstate-handler/0.log" Feb 18 01:56:28 crc 
kubenswrapper[4847]: I0218 01:56:28.096607 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-c6rqp_236a285b-dac4-49c1-9cf1-f76b5b0f6a79/kube-rbac-proxy/0.log" Feb 18 01:56:28 crc kubenswrapper[4847]: I0218 01:56:28.134361 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-c6rqp_236a285b-dac4-49c1-9cf1-f76b5b0f6a79/nmstate-metrics/0.log" Feb 18 01:56:28 crc kubenswrapper[4847]: I0218 01:56:28.241200 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-p7scf_985d9311-59df-4b29-9d4c-0103f801ed1c/nmstate-operator/0.log" Feb 18 01:56:28 crc kubenswrapper[4847]: I0218 01:56:28.312588 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-k57ql_637e4133-8cdb-4098-bd6a-55cb7ce569b4/nmstate-webhook/0.log" Feb 18 01:56:35 crc kubenswrapper[4847]: E0218 01:56:35.407356 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:56:40 crc kubenswrapper[4847]: E0218 01:56:40.407503 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:56:42 crc kubenswrapper[4847]: I0218 01:56:42.146700 4847 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6f64cb577-8nrqk_c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe/kube-rbac-proxy/0.log" Feb 18 01:56:42 crc kubenswrapper[4847]: I0218 01:56:42.181955 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6f64cb577-8nrqk_c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe/manager/0.log" Feb 18 01:56:47 crc kubenswrapper[4847]: E0218 01:56:47.422222 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:56:53 crc kubenswrapper[4847]: I0218 01:56:53.492075 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:56:53 crc kubenswrapper[4847]: I0218 01:56:53.492681 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:56:53 crc kubenswrapper[4847]: I0218 01:56:53.492738 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 01:56:53 crc kubenswrapper[4847]: I0218 01:56:53.493775 4847 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"440c5b942d84801d3391d67ccb7bd978f4d142c7b0d272a754c51245ebf9c23c"} pod="openshift-machine-config-operator/machine-config-daemon-xsj47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 01:56:53 crc kubenswrapper[4847]: I0218 01:56:53.493962 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" containerID="cri-o://440c5b942d84801d3391d67ccb7bd978f4d142c7b0d272a754c51245ebf9c23c" gracePeriod=600 Feb 18 01:56:54 crc kubenswrapper[4847]: E0218 01:56:54.408045 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:56:54 crc kubenswrapper[4847]: I0218 01:56:54.575539 4847 generic.go:334] "Generic (PLEG): container finished" podID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerID="440c5b942d84801d3391d67ccb7bd978f4d142c7b0d272a754c51245ebf9c23c" exitCode=0 Feb 18 01:56:54 crc kubenswrapper[4847]: I0218 01:56:54.575889 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerDied","Data":"440c5b942d84801d3391d67ccb7bd978f4d142c7b0d272a754c51245ebf9c23c"} Feb 18 01:56:54 crc kubenswrapper[4847]: I0218 01:56:54.575912 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerStarted","Data":"3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc"} 
Feb 18 01:56:54 crc kubenswrapper[4847]: I0218 01:56:54.575926 4847 scope.go:117] "RemoveContainer" containerID="90323a36a4546c7d23bcd143d982401388141c8bb77b3db814b3bf9d5a01ed09" Feb 18 01:56:58 crc kubenswrapper[4847]: I0218 01:56:58.594193 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-9jgmd_6d5be12f-bed3-4a23-aa85-f0a08a5fc046/prometheus-operator/0.log" Feb 18 01:56:58 crc kubenswrapper[4847]: I0218 01:56:58.790180 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6fdcccb7c9-lvv8c_c31e5d6e-6fa4-4dfb-bbef-70effd832c70/prometheus-operator-admission-webhook/0.log" Feb 18 01:56:58 crc kubenswrapper[4847]: I0218 01:56:58.839392 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6fdcccb7c9-pwm9s_9a167167-ef99-4088-bf22-f10acba5f1c1/prometheus-operator-admission-webhook/0.log" Feb 18 01:56:58 crc kubenswrapper[4847]: I0218 01:56:58.976127 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-kxn6x_f781a655-8f6a-4fe4-a3e8-306cd263c8f8/operator/0.log" Feb 18 01:56:59 crc kubenswrapper[4847]: I0218 01:56:59.040935 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-25tvl_8095b217-447f-4789-8ef4-fa117075737c/observability-ui-dashboards/0.log" Feb 18 01:56:59 crc kubenswrapper[4847]: I0218 01:56:59.176152 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-jx28n_90db687a-cb80-4d17-848c-f4a28348db36/perses-operator/0.log" Feb 18 01:57:00 crc kubenswrapper[4847]: E0218 01:57:00.407067 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:57:09 crc kubenswrapper[4847]: E0218 01:57:09.413703 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:57:13 crc kubenswrapper[4847]: E0218 01:57:13.406926 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:57:15 crc kubenswrapper[4847]: I0218 01:57:15.929193 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-9ht6n_3833d12b-09b8-4c7c-8f7b-a5d7eec27940/cluster-logging-operator/0.log" Feb 18 01:57:16 crc kubenswrapper[4847]: I0218 01:57:16.067159 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-m899v_7f274356-4622-4bbe-ad54-196514afaa20/collector/0.log" Feb 18 01:57:16 crc kubenswrapper[4847]: I0218 01:57:16.170651 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_b1105da5-f79a-4638-a2cd-9e9219b02682/loki-compactor/0.log" Feb 18 01:57:16 crc kubenswrapper[4847]: I0218 01:57:16.281745 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-x76k8_9e574b70-d0ce-48ba-8a6b-f7b3cbd3d843/loki-distributor/0.log" Feb 18 01:57:16 crc kubenswrapper[4847]: I0218 01:57:16.352678 4847 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-9c654d8fb-r2v6d_777cf1df-2302-473d-87b1-893df3304f21/gateway/0.log" Feb 18 01:57:16 crc kubenswrapper[4847]: I0218 01:57:16.378035 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-9c654d8fb-r2v6d_777cf1df-2302-473d-87b1-893df3304f21/opa/0.log" Feb 18 01:57:16 crc kubenswrapper[4847]: I0218 01:57:16.454698 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-9c654d8fb-tcxtw_aebf8b18-099f-4bfe-88ce-a34461bb4b51/gateway/0.log" Feb 18 01:57:16 crc kubenswrapper[4847]: I0218 01:57:16.551217 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-9c654d8fb-tcxtw_aebf8b18-099f-4bfe-88ce-a34461bb4b51/opa/0.log" Feb 18 01:57:16 crc kubenswrapper[4847]: I0218 01:57:16.629319 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_29b3aa92-5b12-457c-b25a-27aa73aa8c37/loki-index-gateway/0.log" Feb 18 01:57:16 crc kubenswrapper[4847]: I0218 01:57:16.758010 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_6660c016-1faa-43e2-904c-3e8db37f6b3d/loki-ingester/0.log" Feb 18 01:57:16 crc kubenswrapper[4847]: I0218 01:57:16.788811 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-wnr8f_ec7e43fc-d7e7-4bb1-a7cc-a62d2be0a753/loki-querier/0.log" Feb 18 01:57:16 crc kubenswrapper[4847]: I0218 01:57:16.924945 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-wsvv2_810922d4-8577-496f-ad3a-a49c2122d91d/loki-query-frontend/0.log" Feb 18 01:57:23 crc kubenswrapper[4847]: E0218 01:57:23.408067 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:57:24 crc kubenswrapper[4847]: E0218 01:57:24.405584 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:57:32 crc kubenswrapper[4847]: I0218 01:57:32.166178 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-ppjbv_02e790ed-2120-428f-9015-81031198b2ae/kube-rbac-proxy/0.log" Feb 18 01:57:32 crc kubenswrapper[4847]: I0218 01:57:32.363300 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-ppjbv_02e790ed-2120-428f-9015-81031198b2ae/controller/0.log" Feb 18 01:57:32 crc kubenswrapper[4847]: I0218 01:57:32.365899 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m56k2_d5737c80-5d5b-4e38-8826-620411606e6a/cp-frr-files/0.log" Feb 18 01:57:32 crc kubenswrapper[4847]: I0218 01:57:32.487989 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m56k2_d5737c80-5d5b-4e38-8826-620411606e6a/cp-frr-files/0.log" Feb 18 01:57:32 crc kubenswrapper[4847]: I0218 01:57:32.524797 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m56k2_d5737c80-5d5b-4e38-8826-620411606e6a/cp-reloader/0.log" Feb 18 01:57:32 crc kubenswrapper[4847]: I0218 01:57:32.556891 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m56k2_d5737c80-5d5b-4e38-8826-620411606e6a/cp-reloader/0.log" Feb 18 01:57:32 crc kubenswrapper[4847]: I0218 
01:57:32.557414 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m56k2_d5737c80-5d5b-4e38-8826-620411606e6a/cp-metrics/0.log" Feb 18 01:57:32 crc kubenswrapper[4847]: I0218 01:57:32.732011 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m56k2_d5737c80-5d5b-4e38-8826-620411606e6a/cp-reloader/0.log" Feb 18 01:57:32 crc kubenswrapper[4847]: I0218 01:57:32.735418 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m56k2_d5737c80-5d5b-4e38-8826-620411606e6a/cp-frr-files/0.log" Feb 18 01:57:32 crc kubenswrapper[4847]: I0218 01:57:32.745886 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m56k2_d5737c80-5d5b-4e38-8826-620411606e6a/cp-metrics/0.log" Feb 18 01:57:32 crc kubenswrapper[4847]: I0218 01:57:32.814734 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m56k2_d5737c80-5d5b-4e38-8826-620411606e6a/cp-metrics/0.log" Feb 18 01:57:32 crc kubenswrapper[4847]: I0218 01:57:32.931539 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m56k2_d5737c80-5d5b-4e38-8826-620411606e6a/cp-frr-files/0.log" Feb 18 01:57:32 crc kubenswrapper[4847]: I0218 01:57:32.955731 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m56k2_d5737c80-5d5b-4e38-8826-620411606e6a/cp-metrics/0.log" Feb 18 01:57:32 crc kubenswrapper[4847]: I0218 01:57:32.958581 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m56k2_d5737c80-5d5b-4e38-8826-620411606e6a/cp-reloader/0.log" Feb 18 01:57:32 crc kubenswrapper[4847]: I0218 01:57:32.969108 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m56k2_d5737c80-5d5b-4e38-8826-620411606e6a/controller/0.log" Feb 18 01:57:33 crc kubenswrapper[4847]: I0218 01:57:33.126185 4847 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-m56k2_d5737c80-5d5b-4e38-8826-620411606e6a/kube-rbac-proxy/0.log" Feb 18 01:57:33 crc kubenswrapper[4847]: I0218 01:57:33.172730 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m56k2_d5737c80-5d5b-4e38-8826-620411606e6a/frr-metrics/0.log" Feb 18 01:57:33 crc kubenswrapper[4847]: I0218 01:57:33.177049 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m56k2_d5737c80-5d5b-4e38-8826-620411606e6a/kube-rbac-proxy-frr/0.log" Feb 18 01:57:33 crc kubenswrapper[4847]: I0218 01:57:33.361519 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m56k2_d5737c80-5d5b-4e38-8826-620411606e6a/reloader/0.log" Feb 18 01:57:33 crc kubenswrapper[4847]: I0218 01:57:33.398676 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-nzx76_1734f7d8-892a-4a2b-8e64-224d75324d06/frr-k8s-webhook-server/0.log" Feb 18 01:57:33 crc kubenswrapper[4847]: I0218 01:57:33.594988 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5b8b64c6dc-l56n2_817d642e-8dbe-4edb-81b7-21e3b47751bb/manager/0.log" Feb 18 01:57:33 crc kubenswrapper[4847]: I0218 01:57:33.816510 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7c8b4689bf-5lg4r_2ee3c157-6f35-403f-a563-00c85ea7cdbf/webhook-server/0.log" Feb 18 01:57:33 crc kubenswrapper[4847]: I0218 01:57:33.854767 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-45fx5_0cccb9a0-0f8c-44b0-9d0e-e31bcf146024/kube-rbac-proxy/0.log" Feb 18 01:57:34 crc kubenswrapper[4847]: I0218 01:57:34.441235 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-45fx5_0cccb9a0-0f8c-44b0-9d0e-e31bcf146024/speaker/0.log" Feb 18 01:57:34 crc kubenswrapper[4847]: I0218 01:57:34.748532 4847 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m56k2_d5737c80-5d5b-4e38-8826-620411606e6a/frr/0.log" Feb 18 01:57:35 crc kubenswrapper[4847]: E0218 01:57:35.407697 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:57:39 crc kubenswrapper[4847]: E0218 01:57:39.418032 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:57:48 crc kubenswrapper[4847]: E0218 01:57:48.406800 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:57:48 crc kubenswrapper[4847]: I0218 01:57:48.822120 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk_f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb/util/0.log" Feb 18 01:57:48 crc kubenswrapper[4847]: I0218 01:57:48.989799 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk_f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb/util/0.log" Feb 18 01:57:49 crc kubenswrapper[4847]: I0218 01:57:49.018947 4847 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk_f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb/pull/0.log" Feb 18 01:57:49 crc kubenswrapper[4847]: I0218 01:57:49.067442 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk_f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb/pull/0.log" Feb 18 01:57:49 crc kubenswrapper[4847]: I0218 01:57:49.212839 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk_f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb/util/0.log" Feb 18 01:57:49 crc kubenswrapper[4847]: I0218 01:57:49.213774 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk_f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb/extract/0.log" Feb 18 01:57:49 crc kubenswrapper[4847]: I0218 01:57:49.241755 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19nx9xk_f0378fa3-c8b4-43a3-bf6e-14a9066f1fcb/pull/0.log" Feb 18 01:57:49 crc kubenswrapper[4847]: I0218 01:57:49.345673 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954_52b44016-fa7b-4c2a-8071-d4406928c47b/util/0.log" Feb 18 01:57:49 crc kubenswrapper[4847]: I0218 01:57:49.516361 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954_52b44016-fa7b-4c2a-8071-d4406928c47b/pull/0.log" Feb 18 01:57:49 crc kubenswrapper[4847]: I0218 01:57:49.588756 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954_52b44016-fa7b-4c2a-8071-d4406928c47b/util/0.log" Feb 18 
01:57:49 crc kubenswrapper[4847]: I0218 01:57:49.589871 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954_52b44016-fa7b-4c2a-8071-d4406928c47b/pull/0.log" Feb 18 01:57:49 crc kubenswrapper[4847]: I0218 01:57:49.737829 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954_52b44016-fa7b-4c2a-8071-d4406928c47b/util/0.log" Feb 18 01:57:49 crc kubenswrapper[4847]: I0218 01:57:49.762539 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954_52b44016-fa7b-4c2a-8071-d4406928c47b/extract/0.log" Feb 18 01:57:49 crc kubenswrapper[4847]: I0218 01:57:49.776828 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08df954_52b44016-fa7b-4c2a-8071-d4406928c47b/pull/0.log" Feb 18 01:57:49 crc kubenswrapper[4847]: I0218 01:57:49.880738 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2_a04b076e-790c-44cc-8aab-b77901dceadb/util/0.log" Feb 18 01:57:50 crc kubenswrapper[4847]: I0218 01:57:50.064436 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2_a04b076e-790c-44cc-8aab-b77901dceadb/util/0.log" Feb 18 01:57:50 crc kubenswrapper[4847]: I0218 01:57:50.076145 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2_a04b076e-790c-44cc-8aab-b77901dceadb/pull/0.log" Feb 18 01:57:50 crc kubenswrapper[4847]: I0218 01:57:50.095024 4847 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2_a04b076e-790c-44cc-8aab-b77901dceadb/pull/0.log" Feb 18 01:57:50 crc kubenswrapper[4847]: I0218 01:57:50.241538 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2_a04b076e-790c-44cc-8aab-b77901dceadb/pull/0.log" Feb 18 01:57:50 crc kubenswrapper[4847]: I0218 01:57:50.261623 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2_a04b076e-790c-44cc-8aab-b77901dceadb/extract/0.log" Feb 18 01:57:50 crc kubenswrapper[4847]: I0218 01:57:50.263284 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213bghx2_a04b076e-790c-44cc-8aab-b77901dceadb/util/0.log" Feb 18 01:57:50 crc kubenswrapper[4847]: I0218 01:57:50.547020 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sfbrd_aff962e0-6ef9-4a38-86ae-10c0a136da45/extract-utilities/0.log" Feb 18 01:57:50 crc kubenswrapper[4847]: I0218 01:57:50.738637 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sfbrd_aff962e0-6ef9-4a38-86ae-10c0a136da45/extract-utilities/0.log" Feb 18 01:57:50 crc kubenswrapper[4847]: I0218 01:57:50.764797 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sfbrd_aff962e0-6ef9-4a38-86ae-10c0a136da45/extract-content/0.log" Feb 18 01:57:50 crc kubenswrapper[4847]: I0218 01:57:50.767037 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sfbrd_aff962e0-6ef9-4a38-86ae-10c0a136da45/extract-content/0.log" Feb 18 01:57:50 crc kubenswrapper[4847]: I0218 01:57:50.957797 4847 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-sfbrd_aff962e0-6ef9-4a38-86ae-10c0a136da45/extract-content/0.log" Feb 18 01:57:51 crc kubenswrapper[4847]: I0218 01:57:51.000084 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sfbrd_aff962e0-6ef9-4a38-86ae-10c0a136da45/extract-utilities/0.log" Feb 18 01:57:51 crc kubenswrapper[4847]: I0218 01:57:51.147502 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jgtdv_41aa5b5e-b48f-4cee-8f37-6f0229e3766a/extract-utilities/0.log" Feb 18 01:57:51 crc kubenswrapper[4847]: I0218 01:57:51.472796 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jgtdv_41aa5b5e-b48f-4cee-8f37-6f0229e3766a/extract-content/0.log" Feb 18 01:57:51 crc kubenswrapper[4847]: I0218 01:57:51.489396 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jgtdv_41aa5b5e-b48f-4cee-8f37-6f0229e3766a/extract-utilities/0.log" Feb 18 01:57:51 crc kubenswrapper[4847]: I0218 01:57:51.666916 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jgtdv_41aa5b5e-b48f-4cee-8f37-6f0229e3766a/extract-content/0.log" Feb 18 01:57:51 crc kubenswrapper[4847]: I0218 01:57:51.684322 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jgtdv_41aa5b5e-b48f-4cee-8f37-6f0229e3766a/extract-content/0.log" Feb 18 01:57:51 crc kubenswrapper[4847]: I0218 01:57:51.695246 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jgtdv_41aa5b5e-b48f-4cee-8f37-6f0229e3766a/extract-utilities/0.log" Feb 18 01:57:51 crc kubenswrapper[4847]: I0218 01:57:51.848227 4847 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-sfbrd_aff962e0-6ef9-4a38-86ae-10c0a136da45/registry-server/0.log" Feb 18 01:57:51 crc kubenswrapper[4847]: I0218 01:57:51.971583 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp_261e46ac-b43f-490f-bdbe-8181cbecdf0d/util/0.log" Feb 18 01:57:51 crc kubenswrapper[4847]: I0218 01:57:51.999406 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jgtdv_41aa5b5e-b48f-4cee-8f37-6f0229e3766a/registry-server/0.log" Feb 18 01:57:52 crc kubenswrapper[4847]: I0218 01:57:52.142347 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp_261e46ac-b43f-490f-bdbe-8181cbecdf0d/util/0.log" Feb 18 01:57:52 crc kubenswrapper[4847]: I0218 01:57:52.183968 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp_261e46ac-b43f-490f-bdbe-8181cbecdf0d/pull/0.log" Feb 18 01:57:52 crc kubenswrapper[4847]: I0218 01:57:52.189038 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp_261e46ac-b43f-490f-bdbe-8181cbecdf0d/pull/0.log" Feb 18 01:57:52 crc kubenswrapper[4847]: I0218 01:57:52.338071 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp_261e46ac-b43f-490f-bdbe-8181cbecdf0d/util/0.log" Feb 18 01:57:52 crc kubenswrapper[4847]: I0218 01:57:52.370775 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp_261e46ac-b43f-490f-bdbe-8181cbecdf0d/pull/0.log" Feb 18 01:57:52 crc kubenswrapper[4847]: I0218 01:57:52.378789 
4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989hjppp_261e46ac-b43f-490f-bdbe-8181cbecdf0d/extract/0.log" Feb 18 01:57:52 crc kubenswrapper[4847]: I0218 01:57:52.509849 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd_b0059114-96c2-4ba4-9d6f-310d7e0a9372/util/0.log" Feb 18 01:57:52 crc kubenswrapper[4847]: I0218 01:57:52.682789 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd_b0059114-96c2-4ba4-9d6f-310d7e0a9372/util/0.log" Feb 18 01:57:52 crc kubenswrapper[4847]: I0218 01:57:52.701542 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd_b0059114-96c2-4ba4-9d6f-310d7e0a9372/pull/0.log" Feb 18 01:57:52 crc kubenswrapper[4847]: I0218 01:57:52.714200 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd_b0059114-96c2-4ba4-9d6f-310d7e0a9372/pull/0.log" Feb 18 01:57:52 crc kubenswrapper[4847]: I0218 01:57:52.881667 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd_b0059114-96c2-4ba4-9d6f-310d7e0a9372/extract/0.log" Feb 18 01:57:52 crc kubenswrapper[4847]: I0218 01:57:52.897304 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd_b0059114-96c2-4ba4-9d6f-310d7e0a9372/pull/0.log" Feb 18 01:57:52 crc kubenswrapper[4847]: I0218 01:57:52.905560 4847 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqzmqd_b0059114-96c2-4ba4-9d6f-310d7e0a9372/util/0.log" Feb 18 01:57:52 crc kubenswrapper[4847]: I0218 01:57:52.926174 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4dxcv_a3803e77-d427-4d42-9e2e-c8fa87bca4d8/marketplace-operator/0.log" Feb 18 01:57:53 crc kubenswrapper[4847]: I0218 01:57:53.800661 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mwtpz_2edc1248-18dd-42c6-878e-c3e073b33aaa/extract-utilities/0.log" Feb 18 01:57:53 crc kubenswrapper[4847]: I0218 01:57:53.933989 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mwtpz_2edc1248-18dd-42c6-878e-c3e073b33aaa/extract-utilities/0.log" Feb 18 01:57:53 crc kubenswrapper[4847]: I0218 01:57:53.946371 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mwtpz_2edc1248-18dd-42c6-878e-c3e073b33aaa/extract-content/0.log" Feb 18 01:57:53 crc kubenswrapper[4847]: I0218 01:57:53.974793 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mwtpz_2edc1248-18dd-42c6-878e-c3e073b33aaa/extract-content/0.log" Feb 18 01:57:54 crc kubenswrapper[4847]: I0218 01:57:54.158497 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mwtpz_2edc1248-18dd-42c6-878e-c3e073b33aaa/extract-content/0.log" Feb 18 01:57:54 crc kubenswrapper[4847]: I0218 01:57:54.170100 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mwtpz_2edc1248-18dd-42c6-878e-c3e073b33aaa/extract-utilities/0.log" Feb 18 01:57:54 crc kubenswrapper[4847]: I0218 01:57:54.200427 4847 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-jjhq5_b3a5f225-da8f-4b7c-a346-2926b83b1d0f/extract-utilities/0.log" Feb 18 01:57:54 crc kubenswrapper[4847]: I0218 01:57:54.362695 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mwtpz_2edc1248-18dd-42c6-878e-c3e073b33aaa/registry-server/0.log" Feb 18 01:57:54 crc kubenswrapper[4847]: I0218 01:57:54.381795 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jjhq5_b3a5f225-da8f-4b7c-a346-2926b83b1d0f/extract-utilities/0.log" Feb 18 01:57:54 crc kubenswrapper[4847]: E0218 01:57:54.406056 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:57:54 crc kubenswrapper[4847]: I0218 01:57:54.422655 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jjhq5_b3a5f225-da8f-4b7c-a346-2926b83b1d0f/extract-content/0.log" Feb 18 01:57:54 crc kubenswrapper[4847]: I0218 01:57:54.422903 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jjhq5_b3a5f225-da8f-4b7c-a346-2926b83b1d0f/extract-content/0.log" Feb 18 01:57:54 crc kubenswrapper[4847]: I0218 01:57:54.581041 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jjhq5_b3a5f225-da8f-4b7c-a346-2926b83b1d0f/extract-content/0.log" Feb 18 01:57:54 crc kubenswrapper[4847]: I0218 01:57:54.591226 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jjhq5_b3a5f225-da8f-4b7c-a346-2926b83b1d0f/extract-utilities/0.log" Feb 18 01:57:55 crc kubenswrapper[4847]: I0218 01:57:55.327161 4847 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jjhq5_b3a5f225-da8f-4b7c-a346-2926b83b1d0f/registry-server/0.log" Feb 18 01:57:59 crc kubenswrapper[4847]: E0218 01:57:59.408778 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:58:07 crc kubenswrapper[4847]: E0218 01:58:07.406217 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:58:09 crc kubenswrapper[4847]: I0218 01:58:09.737496 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6fdcccb7c9-lvv8c_c31e5d6e-6fa4-4dfb-bbef-70effd832c70/prometheus-operator-admission-webhook/0.log" Feb 18 01:58:09 crc kubenswrapper[4847]: I0218 01:58:09.786861 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-9jgmd_6d5be12f-bed3-4a23-aa85-f0a08a5fc046/prometheus-operator/0.log" Feb 18 01:58:09 crc kubenswrapper[4847]: I0218 01:58:09.822703 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6fdcccb7c9-pwm9s_9a167167-ef99-4088-bf22-f10acba5f1c1/prometheus-operator-admission-webhook/0.log" Feb 18 01:58:09 crc kubenswrapper[4847]: I0218 01:58:09.939285 4847 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-kxn6x_f781a655-8f6a-4fe4-a3e8-306cd263c8f8/operator/0.log" Feb 18 01:58:09 crc kubenswrapper[4847]: I0218 01:58:09.956324 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-25tvl_8095b217-447f-4789-8ef4-fa117075737c/observability-ui-dashboards/0.log" Feb 18 01:58:10 crc kubenswrapper[4847]: I0218 01:58:10.001197 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-jx28n_90db687a-cb80-4d17-848c-f4a28348db36/perses-operator/0.log" Feb 18 01:58:13 crc kubenswrapper[4847]: E0218 01:58:13.406716 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:58:20 crc kubenswrapper[4847]: E0218 01:58:20.407101 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:58:24 crc kubenswrapper[4847]: I0218 01:58:24.479708 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6f64cb577-8nrqk_c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe/kube-rbac-proxy/0.log" Feb 18 01:58:24 crc kubenswrapper[4847]: I0218 01:58:24.560311 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-6f64cb577-8nrqk_c2add3d4-53cf-4cdb-a8c1-a37e0a0776fe/manager/0.log" Feb 18 01:58:25 crc kubenswrapper[4847]: 
E0218 01:58:25.406754 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:58:35 crc kubenswrapper[4847]: I0218 01:58:35.406152 4847 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 01:58:35 crc kubenswrapper[4847]: E0218 01:58:35.557172 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:58:35 crc kubenswrapper[4847]: E0218 01:58:35.557241 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 01:58:35 crc kubenswrapper[4847]: E0218 01:58:35.557390 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjwt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-k4t5r_openstack(452f74c1-fa5f-464b-9943-a4a1c2d5c48a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 01:58:35 crc kubenswrapper[4847]: E0218 01:58:35.558584 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:58:37 crc kubenswrapper[4847]: E0218 01:58:37.414902 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:58:46 crc kubenswrapper[4847]: E0218 01:58:46.136878 4847 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.80:60758->38.102.83.80:41687: write tcp 38.102.83.80:60758->38.102.83.80:41687: write: broken pipe Feb 18 01:58:46 crc kubenswrapper[4847]: E0218 01:58:46.408117 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:58:52 crc kubenswrapper[4847]: E0218 01:58:52.409797 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:58:53 crc kubenswrapper[4847]: I0218 01:58:53.514541 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:58:53 crc kubenswrapper[4847]: I0218 
01:58:53.515015 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:59:01 crc kubenswrapper[4847]: E0218 01:59:01.407407 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:59:04 crc kubenswrapper[4847]: E0218 01:59:04.408092 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:59:12 crc kubenswrapper[4847]: E0218 01:59:12.407569 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:59:18 crc kubenswrapper[4847]: E0218 01:59:18.517730 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:59:18 crc kubenswrapper[4847]: E0218 01:59:18.518310 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 01:59:18 crc kubenswrapper[4847]: E0218 01:59:18.518512 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9fhbdh67dh64h568hch65dhb8h67chf7h66dhfh585h565h85h554h68ch8bh56ch677h57bhb5h544h575hd7h5f8hc7h66dh669h57bh589h56cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-
ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 18 01:59:18 crc kubenswrapper[4847]: E0218 01:59:18.519818 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:59:23 crc kubenswrapper[4847]: E0218 01:59:23.408019 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:59:23 crc kubenswrapper[4847]: I0218 01:59:23.491877 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:59:23 crc kubenswrapper[4847]: I0218 01:59:23.491964 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:59:33 crc kubenswrapper[4847]: E0218 01:59:33.408651 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:59:35 crc kubenswrapper[4847]: E0218 01:59:35.407452 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:59:36 crc kubenswrapper[4847]: I0218 01:59:36.618756 4847 scope.go:117] "RemoveContainer" containerID="37b70171e2fc0992e133544869ab526ad90326e2f51615efe80c8b597a40eec7" Feb 18 01:59:44 crc kubenswrapper[4847]: E0218 01:59:44.406414 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 01:59:47 crc kubenswrapper[4847]: E0218 01:59:47.415445 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 01:59:53 crc kubenswrapper[4847]: I0218 01:59:53.492270 4847 patch_prober.go:28] interesting pod/machine-config-daemon-xsj47 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 01:59:53 crc kubenswrapper[4847]: I0218 
01:59:53.493268 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 01:59:53 crc kubenswrapper[4847]: I0218 01:59:53.493327 4847 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" Feb 18 01:59:53 crc kubenswrapper[4847]: I0218 01:59:53.494996 4847 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc"} pod="openshift-machine-config-operator/machine-config-daemon-xsj47" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 01:59:53 crc kubenswrapper[4847]: I0218 01:59:53.495056 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerName="machine-config-daemon" containerID="cri-o://3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" gracePeriod=600 Feb 18 01:59:53 crc kubenswrapper[4847]: E0218 01:59:53.625590 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:59:53 crc kubenswrapper[4847]: I0218 01:59:53.969494 4847 generic.go:334] "Generic (PLEG): container finished" 
podID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" exitCode=0 Feb 18 01:59:53 crc kubenswrapper[4847]: I0218 01:59:53.969534 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerDied","Data":"3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc"} Feb 18 01:59:53 crc kubenswrapper[4847]: I0218 01:59:53.969626 4847 scope.go:117] "RemoveContainer" containerID="440c5b942d84801d3391d67ccb7bd978f4d142c7b0d272a754c51245ebf9c23c" Feb 18 01:59:53 crc kubenswrapper[4847]: I0218 01:59:53.970523 4847 scope.go:117] "RemoveContainer" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" Feb 18 01:59:53 crc kubenswrapper[4847]: E0218 01:59:53.970994 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 01:59:59 crc kubenswrapper[4847]: E0218 01:59:59.415270 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.152098 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523000-qzt2q"] Feb 18 02:00:00 crc kubenswrapper[4847]: E0218 02:00:00.153120 4847 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b94c451-a6b9-4649-a612-a39065b4e83c" containerName="registry-server" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.154995 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b94c451-a6b9-4649-a612-a39065b4e83c" containerName="registry-server" Feb 18 02:00:00 crc kubenswrapper[4847]: E0218 02:00:00.155127 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b94c451-a6b9-4649-a612-a39065b4e83c" containerName="extract-content" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.155207 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b94c451-a6b9-4649-a612-a39065b4e83c" containerName="extract-content" Feb 18 02:00:00 crc kubenswrapper[4847]: E0218 02:00:00.155365 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9737f87a-c79d-4f6f-9ab3-9b4772129b6e" containerName="extract-utilities" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.155451 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="9737f87a-c79d-4f6f-9ab3-9b4772129b6e" containerName="extract-utilities" Feb 18 02:00:00 crc kubenswrapper[4847]: E0218 02:00:00.155544 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9737f87a-c79d-4f6f-9ab3-9b4772129b6e" containerName="registry-server" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.155640 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="9737f87a-c79d-4f6f-9ab3-9b4772129b6e" containerName="registry-server" Feb 18 02:00:00 crc kubenswrapper[4847]: E0218 02:00:00.155739 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9737f87a-c79d-4f6f-9ab3-9b4772129b6e" containerName="extract-content" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.155811 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="9737f87a-c79d-4f6f-9ab3-9b4772129b6e" containerName="extract-content" Feb 18 02:00:00 crc kubenswrapper[4847]: E0218 02:00:00.155901 4847 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba90b9d-26a6-4184-b27b-303825add8a9" containerName="container-00" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.155973 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba90b9d-26a6-4184-b27b-303825add8a9" containerName="container-00" Feb 18 02:00:00 crc kubenswrapper[4847]: E0218 02:00:00.156066 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b94c451-a6b9-4649-a612-a39065b4e83c" containerName="extract-utilities" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.156139 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b94c451-a6b9-4649-a612-a39065b4e83c" containerName="extract-utilities" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.156652 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ba90b9d-26a6-4184-b27b-303825add8a9" containerName="container-00" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.156762 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="9737f87a-c79d-4f6f-9ab3-9b4772129b6e" containerName="registry-server" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.156863 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b94c451-a6b9-4649-a612-a39065b4e83c" containerName="registry-server" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.157987 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-qzt2q" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.162916 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523000-qzt2q"] Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.164032 4847 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.164723 4847 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.294860 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9965de9d-d4b9-4054-bde3-39baf7994db8-secret-volume\") pod \"collect-profiles-29523000-qzt2q\" (UID: \"9965de9d-d4b9-4054-bde3-39baf7994db8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-qzt2q" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.295221 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9965de9d-d4b9-4054-bde3-39baf7994db8-config-volume\") pod \"collect-profiles-29523000-qzt2q\" (UID: \"9965de9d-d4b9-4054-bde3-39baf7994db8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-qzt2q" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.295270 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4ckz\" (UniqueName: \"kubernetes.io/projected/9965de9d-d4b9-4054-bde3-39baf7994db8-kube-api-access-m4ckz\") pod \"collect-profiles-29523000-qzt2q\" (UID: \"9965de9d-d4b9-4054-bde3-39baf7994db8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-qzt2q" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.397512 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9965de9d-d4b9-4054-bde3-39baf7994db8-secret-volume\") pod \"collect-profiles-29523000-qzt2q\" (UID: \"9965de9d-d4b9-4054-bde3-39baf7994db8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-qzt2q" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.397647 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9965de9d-d4b9-4054-bde3-39baf7994db8-config-volume\") pod \"collect-profiles-29523000-qzt2q\" (UID: \"9965de9d-d4b9-4054-bde3-39baf7994db8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-qzt2q" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.397738 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4ckz\" (UniqueName: \"kubernetes.io/projected/9965de9d-d4b9-4054-bde3-39baf7994db8-kube-api-access-m4ckz\") pod \"collect-profiles-29523000-qzt2q\" (UID: \"9965de9d-d4b9-4054-bde3-39baf7994db8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-qzt2q" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.399294 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9965de9d-d4b9-4054-bde3-39baf7994db8-config-volume\") pod \"collect-profiles-29523000-qzt2q\" (UID: \"9965de9d-d4b9-4054-bde3-39baf7994db8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-qzt2q" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.403812 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/9965de9d-d4b9-4054-bde3-39baf7994db8-secret-volume\") pod \"collect-profiles-29523000-qzt2q\" (UID: \"9965de9d-d4b9-4054-bde3-39baf7994db8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-qzt2q" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.423486 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4ckz\" (UniqueName: \"kubernetes.io/projected/9965de9d-d4b9-4054-bde3-39baf7994db8-kube-api-access-m4ckz\") pod \"collect-profiles-29523000-qzt2q\" (UID: \"9965de9d-d4b9-4054-bde3-39baf7994db8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-qzt2q" Feb 18 02:00:00 crc kubenswrapper[4847]: I0218 02:00:00.483360 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-qzt2q" Feb 18 02:00:01 crc kubenswrapper[4847]: I0218 02:00:01.031734 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29523000-qzt2q"] Feb 18 02:00:01 crc kubenswrapper[4847]: I0218 02:00:01.052627 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-qzt2q" event={"ID":"9965de9d-d4b9-4054-bde3-39baf7994db8","Type":"ContainerStarted","Data":"8ebfc7b0bd6965283d0e34ae8c8813e87f6ea5cd536c4605a9f5b82ed656c4c4"} Feb 18 02:00:01 crc kubenswrapper[4847]: E0218 02:00:01.417577 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 02:00:02 crc kubenswrapper[4847]: I0218 02:00:02.071139 4847 generic.go:334] "Generic (PLEG): container finished" podID="9965de9d-d4b9-4054-bde3-39baf7994db8" 
containerID="2e779d572ba23fb035810b332e5374d11216d1f2a4bd6f2339eac71f5ac1965a" exitCode=0 Feb 18 02:00:02 crc kubenswrapper[4847]: I0218 02:00:02.071181 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-qzt2q" event={"ID":"9965de9d-d4b9-4054-bde3-39baf7994db8","Type":"ContainerDied","Data":"2e779d572ba23fb035810b332e5374d11216d1f2a4bd6f2339eac71f5ac1965a"} Feb 18 02:00:03 crc kubenswrapper[4847]: I0218 02:00:03.496002 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-qzt2q" Feb 18 02:00:03 crc kubenswrapper[4847]: I0218 02:00:03.676359 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9965de9d-d4b9-4054-bde3-39baf7994db8-secret-volume\") pod \"9965de9d-d4b9-4054-bde3-39baf7994db8\" (UID: \"9965de9d-d4b9-4054-bde3-39baf7994db8\") " Feb 18 02:00:03 crc kubenswrapper[4847]: I0218 02:00:03.676481 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4ckz\" (UniqueName: \"kubernetes.io/projected/9965de9d-d4b9-4054-bde3-39baf7994db8-kube-api-access-m4ckz\") pod \"9965de9d-d4b9-4054-bde3-39baf7994db8\" (UID: \"9965de9d-d4b9-4054-bde3-39baf7994db8\") " Feb 18 02:00:03 crc kubenswrapper[4847]: I0218 02:00:03.676590 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9965de9d-d4b9-4054-bde3-39baf7994db8-config-volume\") pod \"9965de9d-d4b9-4054-bde3-39baf7994db8\" (UID: \"9965de9d-d4b9-4054-bde3-39baf7994db8\") " Feb 18 02:00:03 crc kubenswrapper[4847]: I0218 02:00:03.677648 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9965de9d-d4b9-4054-bde3-39baf7994db8-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"9965de9d-d4b9-4054-bde3-39baf7994db8" (UID: "9965de9d-d4b9-4054-bde3-39baf7994db8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 18 02:00:03 crc kubenswrapper[4847]: I0218 02:00:03.693056 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9965de9d-d4b9-4054-bde3-39baf7994db8-kube-api-access-m4ckz" (OuterVolumeSpecName: "kube-api-access-m4ckz") pod "9965de9d-d4b9-4054-bde3-39baf7994db8" (UID: "9965de9d-d4b9-4054-bde3-39baf7994db8"). InnerVolumeSpecName "kube-api-access-m4ckz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 02:00:03 crc kubenswrapper[4847]: I0218 02:00:03.703003 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9965de9d-d4b9-4054-bde3-39baf7994db8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9965de9d-d4b9-4054-bde3-39baf7994db8" (UID: "9965de9d-d4b9-4054-bde3-39baf7994db8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 02:00:03 crc kubenswrapper[4847]: I0218 02:00:03.779397 4847 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9965de9d-d4b9-4054-bde3-39baf7994db8-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 02:00:03 crc kubenswrapper[4847]: I0218 02:00:03.779460 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4ckz\" (UniqueName: \"kubernetes.io/projected/9965de9d-d4b9-4054-bde3-39baf7994db8-kube-api-access-m4ckz\") on node \"crc\" DevicePath \"\"" Feb 18 02:00:03 crc kubenswrapper[4847]: I0218 02:00:03.779474 4847 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9965de9d-d4b9-4054-bde3-39baf7994db8-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 02:00:04 crc kubenswrapper[4847]: I0218 02:00:04.097375 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-qzt2q" event={"ID":"9965de9d-d4b9-4054-bde3-39baf7994db8","Type":"ContainerDied","Data":"8ebfc7b0bd6965283d0e34ae8c8813e87f6ea5cd536c4605a9f5b82ed656c4c4"} Feb 18 02:00:04 crc kubenswrapper[4847]: I0218 02:00:04.097440 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ebfc7b0bd6965283d0e34ae8c8813e87f6ea5cd536c4605a9f5b82ed656c4c4" Feb 18 02:00:04 crc kubenswrapper[4847]: I0218 02:00:04.097552 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29523000-qzt2q" Feb 18 02:00:04 crc kubenswrapper[4847]: I0218 02:00:04.598213 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522955-2nc4b"] Feb 18 02:00:04 crc kubenswrapper[4847]: I0218 02:00:04.608758 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522955-2nc4b"] Feb 18 02:00:05 crc kubenswrapper[4847]: I0218 02:00:05.421535 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eeb9922-af86-4d31-8c27-2c32c5a6e178" path="/var/lib/kubelet/pods/6eeb9922-af86-4d31-8c27-2c32c5a6e178/volumes" Feb 18 02:00:06 crc kubenswrapper[4847]: I0218 02:00:06.124418 4847 generic.go:334] "Generic (PLEG): container finished" podID="98e7900f-9560-4111-a5fd-40d31cab3a0b" containerID="f630250db5843394982839c70f9ebc3683447f40e6213b4d4cbe5251185f3989" exitCode=0 Feb 18 02:00:06 crc kubenswrapper[4847]: I0218 02:00:06.124494 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-69r5f/must-gather-gssps" event={"ID":"98e7900f-9560-4111-a5fd-40d31cab3a0b","Type":"ContainerDied","Data":"f630250db5843394982839c70f9ebc3683447f40e6213b4d4cbe5251185f3989"} Feb 18 02:00:06 crc kubenswrapper[4847]: I0218 02:00:06.125410 4847 scope.go:117] "RemoveContainer" containerID="f630250db5843394982839c70f9ebc3683447f40e6213b4d4cbe5251185f3989" Feb 18 02:00:06 crc kubenswrapper[4847]: I0218 02:00:06.838981 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-69r5f_must-gather-gssps_98e7900f-9560-4111-a5fd-40d31cab3a0b/gather/0.log" Feb 18 02:00:08 crc kubenswrapper[4847]: I0218 02:00:08.408354 4847 scope.go:117] "RemoveContainer" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" Feb 18 02:00:08 crc kubenswrapper[4847]: E0218 02:00:08.409575 4847 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 02:00:12 crc kubenswrapper[4847]: E0218 02:00:12.406787 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 02:00:13 crc kubenswrapper[4847]: E0218 02:00:13.406928 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 02:00:15 crc kubenswrapper[4847]: I0218 02:00:15.244242 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-69r5f/must-gather-gssps"] Feb 18 02:00:15 crc kubenswrapper[4847]: I0218 02:00:15.244942 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-69r5f/must-gather-gssps" podUID="98e7900f-9560-4111-a5fd-40d31cab3a0b" containerName="copy" containerID="cri-o://c6019846a0e31c8d4fc024c601869173e8c18cde78964ec9db834278aa07fd07" gracePeriod=2 Feb 18 02:00:15 crc kubenswrapper[4847]: I0218 02:00:15.259867 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-69r5f/must-gather-gssps"] Feb 18 02:00:15 crc kubenswrapper[4847]: I0218 02:00:15.798241 4847 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-must-gather-69r5f_must-gather-gssps_98e7900f-9560-4111-a5fd-40d31cab3a0b/copy/0.log" Feb 18 02:00:15 crc kubenswrapper[4847]: I0218 02:00:15.798838 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-69r5f/must-gather-gssps" Feb 18 02:00:15 crc kubenswrapper[4847]: I0218 02:00:15.885772 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/98e7900f-9560-4111-a5fd-40d31cab3a0b-must-gather-output\") pod \"98e7900f-9560-4111-a5fd-40d31cab3a0b\" (UID: \"98e7900f-9560-4111-a5fd-40d31cab3a0b\") " Feb 18 02:00:15 crc kubenswrapper[4847]: I0218 02:00:15.885889 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2jhx\" (UniqueName: \"kubernetes.io/projected/98e7900f-9560-4111-a5fd-40d31cab3a0b-kube-api-access-d2jhx\") pod \"98e7900f-9560-4111-a5fd-40d31cab3a0b\" (UID: \"98e7900f-9560-4111-a5fd-40d31cab3a0b\") " Feb 18 02:00:15 crc kubenswrapper[4847]: I0218 02:00:15.893269 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98e7900f-9560-4111-a5fd-40d31cab3a0b-kube-api-access-d2jhx" (OuterVolumeSpecName: "kube-api-access-d2jhx") pod "98e7900f-9560-4111-a5fd-40d31cab3a0b" (UID: "98e7900f-9560-4111-a5fd-40d31cab3a0b"). InnerVolumeSpecName "kube-api-access-d2jhx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 02:00:15 crc kubenswrapper[4847]: I0218 02:00:15.988342 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2jhx\" (UniqueName: \"kubernetes.io/projected/98e7900f-9560-4111-a5fd-40d31cab3a0b-kube-api-access-d2jhx\") on node \"crc\" DevicePath \"\"" Feb 18 02:00:16 crc kubenswrapper[4847]: I0218 02:00:16.074740 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98e7900f-9560-4111-a5fd-40d31cab3a0b-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "98e7900f-9560-4111-a5fd-40d31cab3a0b" (UID: "98e7900f-9560-4111-a5fd-40d31cab3a0b"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 02:00:16 crc kubenswrapper[4847]: I0218 02:00:16.090079 4847 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/98e7900f-9560-4111-a5fd-40d31cab3a0b-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 18 02:00:16 crc kubenswrapper[4847]: I0218 02:00:16.248763 4847 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-69r5f_must-gather-gssps_98e7900f-9560-4111-a5fd-40d31cab3a0b/copy/0.log" Feb 18 02:00:16 crc kubenswrapper[4847]: I0218 02:00:16.249236 4847 generic.go:334] "Generic (PLEG): container finished" podID="98e7900f-9560-4111-a5fd-40d31cab3a0b" containerID="c6019846a0e31c8d4fc024c601869173e8c18cde78964ec9db834278aa07fd07" exitCode=143 Feb 18 02:00:16 crc kubenswrapper[4847]: I0218 02:00:16.249275 4847 scope.go:117] "RemoveContainer" containerID="c6019846a0e31c8d4fc024c601869173e8c18cde78964ec9db834278aa07fd07" Feb 18 02:00:16 crc kubenswrapper[4847]: I0218 02:00:16.249377 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-69r5f/must-gather-gssps" Feb 18 02:00:16 crc kubenswrapper[4847]: I0218 02:00:16.286003 4847 scope.go:117] "RemoveContainer" containerID="f630250db5843394982839c70f9ebc3683447f40e6213b4d4cbe5251185f3989" Feb 18 02:00:16 crc kubenswrapper[4847]: I0218 02:00:16.378207 4847 scope.go:117] "RemoveContainer" containerID="c6019846a0e31c8d4fc024c601869173e8c18cde78964ec9db834278aa07fd07" Feb 18 02:00:16 crc kubenswrapper[4847]: E0218 02:00:16.378724 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6019846a0e31c8d4fc024c601869173e8c18cde78964ec9db834278aa07fd07\": container with ID starting with c6019846a0e31c8d4fc024c601869173e8c18cde78964ec9db834278aa07fd07 not found: ID does not exist" containerID="c6019846a0e31c8d4fc024c601869173e8c18cde78964ec9db834278aa07fd07" Feb 18 02:00:16 crc kubenswrapper[4847]: I0218 02:00:16.378768 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6019846a0e31c8d4fc024c601869173e8c18cde78964ec9db834278aa07fd07"} err="failed to get container status \"c6019846a0e31c8d4fc024c601869173e8c18cde78964ec9db834278aa07fd07\": rpc error: code = NotFound desc = could not find container \"c6019846a0e31c8d4fc024c601869173e8c18cde78964ec9db834278aa07fd07\": container with ID starting with c6019846a0e31c8d4fc024c601869173e8c18cde78964ec9db834278aa07fd07 not found: ID does not exist" Feb 18 02:00:16 crc kubenswrapper[4847]: I0218 02:00:16.378800 4847 scope.go:117] "RemoveContainer" containerID="f630250db5843394982839c70f9ebc3683447f40e6213b4d4cbe5251185f3989" Feb 18 02:00:16 crc kubenswrapper[4847]: E0218 02:00:16.379213 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f630250db5843394982839c70f9ebc3683447f40e6213b4d4cbe5251185f3989\": container with ID starting with 
f630250db5843394982839c70f9ebc3683447f40e6213b4d4cbe5251185f3989 not found: ID does not exist" containerID="f630250db5843394982839c70f9ebc3683447f40e6213b4d4cbe5251185f3989" Feb 18 02:00:16 crc kubenswrapper[4847]: I0218 02:00:16.379243 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f630250db5843394982839c70f9ebc3683447f40e6213b4d4cbe5251185f3989"} err="failed to get container status \"f630250db5843394982839c70f9ebc3683447f40e6213b4d4cbe5251185f3989\": rpc error: code = NotFound desc = could not find container \"f630250db5843394982839c70f9ebc3683447f40e6213b4d4cbe5251185f3989\": container with ID starting with f630250db5843394982839c70f9ebc3683447f40e6213b4d4cbe5251185f3989 not found: ID does not exist" Feb 18 02:00:17 crc kubenswrapper[4847]: I0218 02:00:17.415298 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98e7900f-9560-4111-a5fd-40d31cab3a0b" path="/var/lib/kubelet/pods/98e7900f-9560-4111-a5fd-40d31cab3a0b/volumes" Feb 18 02:00:22 crc kubenswrapper[4847]: I0218 02:00:22.404736 4847 scope.go:117] "RemoveContainer" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" Feb 18 02:00:22 crc kubenswrapper[4847]: E0218 02:00:22.405576 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 02:00:24 crc kubenswrapper[4847]: E0218 02:00:24.411861 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 02:00:26 crc kubenswrapper[4847]: E0218 02:00:26.406497 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 02:00:34 crc kubenswrapper[4847]: I0218 02:00:34.406831 4847 scope.go:117] "RemoveContainer" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" Feb 18 02:00:34 crc kubenswrapper[4847]: E0218 02:00:34.408840 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 02:00:36 crc kubenswrapper[4847]: I0218 02:00:36.722406 4847 scope.go:117] "RemoveContainer" containerID="c288a9e83d0ecb0b38df3eb2ed359301d6e0d77dc9d091276c7de97d439d8513" Feb 18 02:00:37 crc kubenswrapper[4847]: E0218 02:00:37.412276 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 02:00:39 crc kubenswrapper[4847]: E0218 02:00:39.406505 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 02:00:47 crc kubenswrapper[4847]: I0218 02:00:47.431894 4847 scope.go:117] "RemoveContainer" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" Feb 18 02:00:47 crc kubenswrapper[4847]: E0218 02:00:47.433842 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 02:00:49 crc kubenswrapper[4847]: E0218 02:00:49.407081 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 02:00:54 crc kubenswrapper[4847]: E0218 02:00:54.407649 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 02:01:00 crc kubenswrapper[4847]: I0218 02:01:00.159880 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29523001-rwg8l"] Feb 18 02:01:00 crc kubenswrapper[4847]: E0218 02:01:00.161082 4847 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="9965de9d-d4b9-4054-bde3-39baf7994db8" containerName="collect-profiles" Feb 18 02:01:00 crc kubenswrapper[4847]: I0218 02:01:00.161098 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="9965de9d-d4b9-4054-bde3-39baf7994db8" containerName="collect-profiles" Feb 18 02:01:00 crc kubenswrapper[4847]: E0218 02:01:00.161119 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98e7900f-9560-4111-a5fd-40d31cab3a0b" containerName="gather" Feb 18 02:01:00 crc kubenswrapper[4847]: I0218 02:01:00.161129 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="98e7900f-9560-4111-a5fd-40d31cab3a0b" containerName="gather" Feb 18 02:01:00 crc kubenswrapper[4847]: E0218 02:01:00.161149 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98e7900f-9560-4111-a5fd-40d31cab3a0b" containerName="copy" Feb 18 02:01:00 crc kubenswrapper[4847]: I0218 02:01:00.161158 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="98e7900f-9560-4111-a5fd-40d31cab3a0b" containerName="copy" Feb 18 02:01:00 crc kubenswrapper[4847]: I0218 02:01:00.161402 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="9965de9d-d4b9-4054-bde3-39baf7994db8" containerName="collect-profiles" Feb 18 02:01:00 crc kubenswrapper[4847]: I0218 02:01:00.161414 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="98e7900f-9560-4111-a5fd-40d31cab3a0b" containerName="copy" Feb 18 02:01:00 crc kubenswrapper[4847]: I0218 02:01:00.161432 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="98e7900f-9560-4111-a5fd-40d31cab3a0b" containerName="gather" Feb 18 02:01:00 crc kubenswrapper[4847]: I0218 02:01:00.162546 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29523001-rwg8l" Feb 18 02:01:00 crc kubenswrapper[4847]: I0218 02:01:00.191514 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29523001-rwg8l"] Feb 18 02:01:00 crc kubenswrapper[4847]: I0218 02:01:00.324337 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-config-data\") pod \"keystone-cron-29523001-rwg8l\" (UID: \"73ca5b7f-5f7e-4d52-8bff-2317544aaa20\") " pod="openstack/keystone-cron-29523001-rwg8l" Feb 18 02:01:00 crc kubenswrapper[4847]: I0218 02:01:00.324399 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-fernet-keys\") pod \"keystone-cron-29523001-rwg8l\" (UID: \"73ca5b7f-5f7e-4d52-8bff-2317544aaa20\") " pod="openstack/keystone-cron-29523001-rwg8l" Feb 18 02:01:00 crc kubenswrapper[4847]: I0218 02:01:00.324499 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnl52\" (UniqueName: \"kubernetes.io/projected/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-kube-api-access-jnl52\") pod \"keystone-cron-29523001-rwg8l\" (UID: \"73ca5b7f-5f7e-4d52-8bff-2317544aaa20\") " pod="openstack/keystone-cron-29523001-rwg8l" Feb 18 02:01:00 crc kubenswrapper[4847]: I0218 02:01:00.324634 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-combined-ca-bundle\") pod \"keystone-cron-29523001-rwg8l\" (UID: \"73ca5b7f-5f7e-4d52-8bff-2317544aaa20\") " pod="openstack/keystone-cron-29523001-rwg8l" Feb 18 02:01:00 crc kubenswrapper[4847]: E0218 02:01:00.408238 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 02:01:00 crc kubenswrapper[4847]: I0218 02:01:00.426533 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-combined-ca-bundle\") pod \"keystone-cron-29523001-rwg8l\" (UID: \"73ca5b7f-5f7e-4d52-8bff-2317544aaa20\") " pod="openstack/keystone-cron-29523001-rwg8l" Feb 18 02:01:00 crc kubenswrapper[4847]: I0218 02:01:00.426711 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-config-data\") pod \"keystone-cron-29523001-rwg8l\" (UID: \"73ca5b7f-5f7e-4d52-8bff-2317544aaa20\") " pod="openstack/keystone-cron-29523001-rwg8l" Feb 18 02:01:00 crc kubenswrapper[4847]: I0218 02:01:00.426747 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-fernet-keys\") pod \"keystone-cron-29523001-rwg8l\" (UID: \"73ca5b7f-5f7e-4d52-8bff-2317544aaa20\") " pod="openstack/keystone-cron-29523001-rwg8l" Feb 18 02:01:00 crc kubenswrapper[4847]: I0218 02:01:00.426815 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnl52\" (UniqueName: \"kubernetes.io/projected/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-kube-api-access-jnl52\") pod \"keystone-cron-29523001-rwg8l\" (UID: \"73ca5b7f-5f7e-4d52-8bff-2317544aaa20\") " pod="openstack/keystone-cron-29523001-rwg8l" Feb 18 02:01:00 crc kubenswrapper[4847]: I0218 02:01:00.434382 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-combined-ca-bundle\") pod \"keystone-cron-29523001-rwg8l\" (UID: \"73ca5b7f-5f7e-4d52-8bff-2317544aaa20\") " pod="openstack/keystone-cron-29523001-rwg8l" Feb 18 02:01:00 crc kubenswrapper[4847]: I0218 02:01:00.435056 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-fernet-keys\") pod \"keystone-cron-29523001-rwg8l\" (UID: \"73ca5b7f-5f7e-4d52-8bff-2317544aaa20\") " pod="openstack/keystone-cron-29523001-rwg8l" Feb 18 02:01:00 crc kubenswrapper[4847]: I0218 02:01:00.438661 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-config-data\") pod \"keystone-cron-29523001-rwg8l\" (UID: \"73ca5b7f-5f7e-4d52-8bff-2317544aaa20\") " pod="openstack/keystone-cron-29523001-rwg8l" Feb 18 02:01:00 crc kubenswrapper[4847]: I0218 02:01:00.457455 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnl52\" (UniqueName: \"kubernetes.io/projected/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-kube-api-access-jnl52\") pod \"keystone-cron-29523001-rwg8l\" (UID: \"73ca5b7f-5f7e-4d52-8bff-2317544aaa20\") " pod="openstack/keystone-cron-29523001-rwg8l" Feb 18 02:01:00 crc kubenswrapper[4847]: I0218 02:01:00.491818 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29523001-rwg8l" Feb 18 02:01:00 crc kubenswrapper[4847]: I0218 02:01:00.825270 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29523001-rwg8l"] Feb 18 02:01:00 crc kubenswrapper[4847]: W0218 02:01:00.831953 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73ca5b7f_5f7e_4d52_8bff_2317544aaa20.slice/crio-890d2837ff814f8394004adfcf1e0bb3aee94d2c59e7b12d2dbcf07273f39048 WatchSource:0}: Error finding container 890d2837ff814f8394004adfcf1e0bb3aee94d2c59e7b12d2dbcf07273f39048: Status 404 returned error can't find the container with id 890d2837ff814f8394004adfcf1e0bb3aee94d2c59e7b12d2dbcf07273f39048 Feb 18 02:01:01 crc kubenswrapper[4847]: I0218 02:01:01.862458 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523001-rwg8l" event={"ID":"73ca5b7f-5f7e-4d52-8bff-2317544aaa20","Type":"ContainerStarted","Data":"147aed73aa20a66532028d8a8524109b4106a12c0ac5932517abfd7beec948a0"} Feb 18 02:01:01 crc kubenswrapper[4847]: I0218 02:01:01.862888 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523001-rwg8l" event={"ID":"73ca5b7f-5f7e-4d52-8bff-2317544aaa20","Type":"ContainerStarted","Data":"890d2837ff814f8394004adfcf1e0bb3aee94d2c59e7b12d2dbcf07273f39048"} Feb 18 02:01:01 crc kubenswrapper[4847]: I0218 02:01:01.895583 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29523001-rwg8l" podStartSLOduration=1.895565339 podStartE2EDuration="1.895565339s" podCreationTimestamp="2026-02-18 02:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 02:01:01.876695826 +0000 UTC m=+5735.254046778" watchObservedRunningTime="2026-02-18 02:01:01.895565339 +0000 UTC m=+5735.272916281" Feb 18 02:01:02 crc 
kubenswrapper[4847]: I0218 02:01:02.404834 4847 scope.go:117] "RemoveContainer" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" Feb 18 02:01:02 crc kubenswrapper[4847]: E0218 02:01:02.405083 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 02:01:04 crc kubenswrapper[4847]: I0218 02:01:04.894650 4847 generic.go:334] "Generic (PLEG): container finished" podID="73ca5b7f-5f7e-4d52-8bff-2317544aaa20" containerID="147aed73aa20a66532028d8a8524109b4106a12c0ac5932517abfd7beec948a0" exitCode=0 Feb 18 02:01:04 crc kubenswrapper[4847]: I0218 02:01:04.894729 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523001-rwg8l" event={"ID":"73ca5b7f-5f7e-4d52-8bff-2317544aaa20","Type":"ContainerDied","Data":"147aed73aa20a66532028d8a8524109b4106a12c0ac5932517abfd7beec948a0"} Feb 18 02:01:06 crc kubenswrapper[4847]: I0218 02:01:06.275045 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29523001-rwg8l" Feb 18 02:01:06 crc kubenswrapper[4847]: I0218 02:01:06.368665 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnl52\" (UniqueName: \"kubernetes.io/projected/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-kube-api-access-jnl52\") pod \"73ca5b7f-5f7e-4d52-8bff-2317544aaa20\" (UID: \"73ca5b7f-5f7e-4d52-8bff-2317544aaa20\") " Feb 18 02:01:06 crc kubenswrapper[4847]: I0218 02:01:06.368880 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-fernet-keys\") pod \"73ca5b7f-5f7e-4d52-8bff-2317544aaa20\" (UID: \"73ca5b7f-5f7e-4d52-8bff-2317544aaa20\") " Feb 18 02:01:06 crc kubenswrapper[4847]: I0218 02:01:06.368951 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-combined-ca-bundle\") pod \"73ca5b7f-5f7e-4d52-8bff-2317544aaa20\" (UID: \"73ca5b7f-5f7e-4d52-8bff-2317544aaa20\") " Feb 18 02:01:06 crc kubenswrapper[4847]: I0218 02:01:06.369008 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-config-data\") pod \"73ca5b7f-5f7e-4d52-8bff-2317544aaa20\" (UID: \"73ca5b7f-5f7e-4d52-8bff-2317544aaa20\") " Feb 18 02:01:06 crc kubenswrapper[4847]: I0218 02:01:06.375007 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "73ca5b7f-5f7e-4d52-8bff-2317544aaa20" (UID: "73ca5b7f-5f7e-4d52-8bff-2317544aaa20"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 02:01:06 crc kubenswrapper[4847]: I0218 02:01:06.375881 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-kube-api-access-jnl52" (OuterVolumeSpecName: "kube-api-access-jnl52") pod "73ca5b7f-5f7e-4d52-8bff-2317544aaa20" (UID: "73ca5b7f-5f7e-4d52-8bff-2317544aaa20"). InnerVolumeSpecName "kube-api-access-jnl52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 02:01:06 crc kubenswrapper[4847]: I0218 02:01:06.401804 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73ca5b7f-5f7e-4d52-8bff-2317544aaa20" (UID: "73ca5b7f-5f7e-4d52-8bff-2317544aaa20"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 02:01:06 crc kubenswrapper[4847]: I0218 02:01:06.448260 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-config-data" (OuterVolumeSpecName: "config-data") pod "73ca5b7f-5f7e-4d52-8bff-2317544aaa20" (UID: "73ca5b7f-5f7e-4d52-8bff-2317544aaa20"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 18 02:01:06 crc kubenswrapper[4847]: I0218 02:01:06.471639 4847 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-config-data\") on node \"crc\" DevicePath \"\"" Feb 18 02:01:06 crc kubenswrapper[4847]: I0218 02:01:06.471681 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnl52\" (UniqueName: \"kubernetes.io/projected/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-kube-api-access-jnl52\") on node \"crc\" DevicePath \"\"" Feb 18 02:01:06 crc kubenswrapper[4847]: I0218 02:01:06.471696 4847 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 18 02:01:06 crc kubenswrapper[4847]: I0218 02:01:06.471706 4847 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73ca5b7f-5f7e-4d52-8bff-2317544aaa20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 02:01:06 crc kubenswrapper[4847]: I0218 02:01:06.917999 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29523001-rwg8l" event={"ID":"73ca5b7f-5f7e-4d52-8bff-2317544aaa20","Type":"ContainerDied","Data":"890d2837ff814f8394004adfcf1e0bb3aee94d2c59e7b12d2dbcf07273f39048"} Feb 18 02:01:06 crc kubenswrapper[4847]: I0218 02:01:06.918314 4847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="890d2837ff814f8394004adfcf1e0bb3aee94d2c59e7b12d2dbcf07273f39048" Feb 18 02:01:06 crc kubenswrapper[4847]: I0218 02:01:06.918057 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29523001-rwg8l" Feb 18 02:01:08 crc kubenswrapper[4847]: E0218 02:01:08.406512 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 02:01:11 crc kubenswrapper[4847]: E0218 02:01:11.408137 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 02:01:14 crc kubenswrapper[4847]: I0218 02:01:14.405331 4847 scope.go:117] "RemoveContainer" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" Feb 18 02:01:14 crc kubenswrapper[4847]: E0218 02:01:14.406437 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 02:01:23 crc kubenswrapper[4847]: E0218 02:01:23.412028 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 02:01:24 crc kubenswrapper[4847]: E0218 
02:01:24.405710 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 02:01:28 crc kubenswrapper[4847]: I0218 02:01:28.405237 4847 scope.go:117] "RemoveContainer" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" Feb 18 02:01:28 crc kubenswrapper[4847]: E0218 02:01:28.406847 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 02:01:36 crc kubenswrapper[4847]: E0218 02:01:36.408107 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 02:01:38 crc kubenswrapper[4847]: E0218 02:01:38.407515 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 02:01:42 crc kubenswrapper[4847]: I0218 02:01:42.404876 4847 scope.go:117] "RemoveContainer" 
containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" Feb 18 02:01:42 crc kubenswrapper[4847]: E0218 02:01:42.405913 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 02:01:47 crc kubenswrapper[4847]: E0218 02:01:47.425299 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 02:01:51 crc kubenswrapper[4847]: E0218 02:01:51.407179 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 02:01:54 crc kubenswrapper[4847]: I0218 02:01:54.465304 4847 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4dxcv container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 02:01:54 crc kubenswrapper[4847]: I0218 02:01:54.465731 4847 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-4dxcv" podUID="a3803e77-d427-4d42-9e2e-c8fa87bca4d8" 
containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 02:01:54 crc kubenswrapper[4847]: I0218 02:01:54.466705 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-xttb8" podUID="5cb3848f-23f4-4037-876f-e390daafc3ba" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 02:01:54 crc kubenswrapper[4847]: I0218 02:01:54.466780 4847 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4dxcv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 18 02:01:54 crc kubenswrapper[4847]: I0218 02:01:54.466812 4847 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4dxcv" podUID="a3803e77-d427-4d42-9e2e-c8fa87bca4d8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.62:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 18 02:01:56 crc kubenswrapper[4847]: I0218 02:01:56.406240 4847 scope.go:117] "RemoveContainer" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" Feb 18 02:01:56 crc kubenswrapper[4847]: E0218 02:01:56.407127 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 02:02:00 crc kubenswrapper[4847]: E0218 02:02:00.407736 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 02:02:04 crc kubenswrapper[4847]: E0218 02:02:04.407139 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 02:02:08 crc kubenswrapper[4847]: I0218 02:02:08.406139 4847 scope.go:117] "RemoveContainer" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" Feb 18 02:02:08 crc kubenswrapper[4847]: E0218 02:02:08.406982 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 02:02:13 crc kubenswrapper[4847]: E0218 02:02:13.408411 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" 
Feb 18 02:02:18 crc kubenswrapper[4847]: E0218 02:02:18.408125 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 02:02:23 crc kubenswrapper[4847]: I0218 02:02:23.404226 4847 scope.go:117] "RemoveContainer" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" Feb 18 02:02:23 crc kubenswrapper[4847]: E0218 02:02:23.406381 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 02:02:24 crc kubenswrapper[4847]: E0218 02:02:24.411198 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 02:02:30 crc kubenswrapper[4847]: I0218 02:02:30.153578 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j5hkr"] Feb 18 02:02:30 crc kubenswrapper[4847]: E0218 02:02:30.154870 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ca5b7f-5f7e-4d52-8bff-2317544aaa20" containerName="keystone-cron" Feb 18 02:02:30 crc kubenswrapper[4847]: I0218 02:02:30.154893 4847 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="73ca5b7f-5f7e-4d52-8bff-2317544aaa20" containerName="keystone-cron" Feb 18 02:02:30 crc kubenswrapper[4847]: I0218 02:02:30.155228 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ca5b7f-5f7e-4d52-8bff-2317544aaa20" containerName="keystone-cron" Feb 18 02:02:30 crc kubenswrapper[4847]: I0218 02:02:30.158234 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5hkr" Feb 18 02:02:30 crc kubenswrapper[4847]: I0218 02:02:30.178647 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5hkr"] Feb 18 02:02:30 crc kubenswrapper[4847]: I0218 02:02:30.198557 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqlnj\" (UniqueName: \"kubernetes.io/projected/7af31c1b-56c7-4184-a118-0c831bacd641-kube-api-access-nqlnj\") pod \"redhat-marketplace-j5hkr\" (UID: \"7af31c1b-56c7-4184-a118-0c831bacd641\") " pod="openshift-marketplace/redhat-marketplace-j5hkr" Feb 18 02:02:30 crc kubenswrapper[4847]: I0218 02:02:30.198875 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af31c1b-56c7-4184-a118-0c831bacd641-catalog-content\") pod \"redhat-marketplace-j5hkr\" (UID: \"7af31c1b-56c7-4184-a118-0c831bacd641\") " pod="openshift-marketplace/redhat-marketplace-j5hkr" Feb 18 02:02:30 crc kubenswrapper[4847]: I0218 02:02:30.200958 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7af31c1b-56c7-4184-a118-0c831bacd641-utilities\") pod \"redhat-marketplace-j5hkr\" (UID: \"7af31c1b-56c7-4184-a118-0c831bacd641\") " pod="openshift-marketplace/redhat-marketplace-j5hkr" Feb 18 02:02:30 crc kubenswrapper[4847]: I0218 02:02:30.303416 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nqlnj\" (UniqueName: \"kubernetes.io/projected/7af31c1b-56c7-4184-a118-0c831bacd641-kube-api-access-nqlnj\") pod \"redhat-marketplace-j5hkr\" (UID: \"7af31c1b-56c7-4184-a118-0c831bacd641\") " pod="openshift-marketplace/redhat-marketplace-j5hkr" Feb 18 02:02:30 crc kubenswrapper[4847]: I0218 02:02:30.303963 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af31c1b-56c7-4184-a118-0c831bacd641-catalog-content\") pod \"redhat-marketplace-j5hkr\" (UID: \"7af31c1b-56c7-4184-a118-0c831bacd641\") " pod="openshift-marketplace/redhat-marketplace-j5hkr" Feb 18 02:02:30 crc kubenswrapper[4847]: I0218 02:02:30.304525 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af31c1b-56c7-4184-a118-0c831bacd641-catalog-content\") pod \"redhat-marketplace-j5hkr\" (UID: \"7af31c1b-56c7-4184-a118-0c831bacd641\") " pod="openshift-marketplace/redhat-marketplace-j5hkr" Feb 18 02:02:30 crc kubenswrapper[4847]: I0218 02:02:30.304763 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7af31c1b-56c7-4184-a118-0c831bacd641-utilities\") pod \"redhat-marketplace-j5hkr\" (UID: \"7af31c1b-56c7-4184-a118-0c831bacd641\") " pod="openshift-marketplace/redhat-marketplace-j5hkr" Feb 18 02:02:30 crc kubenswrapper[4847]: I0218 02:02:30.305091 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7af31c1b-56c7-4184-a118-0c831bacd641-utilities\") pod \"redhat-marketplace-j5hkr\" (UID: \"7af31c1b-56c7-4184-a118-0c831bacd641\") " pod="openshift-marketplace/redhat-marketplace-j5hkr" Feb 18 02:02:30 crc kubenswrapper[4847]: I0218 02:02:30.325369 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nqlnj\" (UniqueName: \"kubernetes.io/projected/7af31c1b-56c7-4184-a118-0c831bacd641-kube-api-access-nqlnj\") pod \"redhat-marketplace-j5hkr\" (UID: \"7af31c1b-56c7-4184-a118-0c831bacd641\") " pod="openshift-marketplace/redhat-marketplace-j5hkr" Feb 18 02:02:30 crc kubenswrapper[4847]: I0218 02:02:30.495459 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5hkr" Feb 18 02:02:30 crc kubenswrapper[4847]: I0218 02:02:30.986655 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5hkr"] Feb 18 02:02:30 crc kubenswrapper[4847]: W0218 02:02:30.996282 4847 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7af31c1b_56c7_4184_a118_0c831bacd641.slice/crio-c49d20294b86c2b24e9a4c16885a7aea3e1a8500e59d762e4d09e22165e9c71c WatchSource:0}: Error finding container c49d20294b86c2b24e9a4c16885a7aea3e1a8500e59d762e4d09e22165e9c71c: Status 404 returned error can't find the container with id c49d20294b86c2b24e9a4c16885a7aea3e1a8500e59d762e4d09e22165e9c71c Feb 18 02:02:31 crc kubenswrapper[4847]: I0218 02:02:31.053929 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5hkr" event={"ID":"7af31c1b-56c7-4184-a118-0c831bacd641","Type":"ContainerStarted","Data":"c49d20294b86c2b24e9a4c16885a7aea3e1a8500e59d762e4d09e22165e9c71c"} Feb 18 02:02:32 crc kubenswrapper[4847]: I0218 02:02:32.069873 4847 generic.go:334] "Generic (PLEG): container finished" podID="7af31c1b-56c7-4184-a118-0c831bacd641" containerID="5236b0ba034bc5b64f2baa5bc2cea22fe43198fb6b2ca3daea7f28e227760be4" exitCode=0 Feb 18 02:02:32 crc kubenswrapper[4847]: I0218 02:02:32.069939 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5hkr" 
event={"ID":"7af31c1b-56c7-4184-a118-0c831bacd641","Type":"ContainerDied","Data":"5236b0ba034bc5b64f2baa5bc2cea22fe43198fb6b2ca3daea7f28e227760be4"} Feb 18 02:02:32 crc kubenswrapper[4847]: E0218 02:02:32.407692 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 02:02:33 crc kubenswrapper[4847]: I0218 02:02:33.083368 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5hkr" event={"ID":"7af31c1b-56c7-4184-a118-0c831bacd641","Type":"ContainerStarted","Data":"a23a2d524920843ea41b83b569281ab867d94c4c78bab941808341ec2d82d070"} Feb 18 02:02:34 crc kubenswrapper[4847]: I0218 02:02:34.099156 4847 generic.go:334] "Generic (PLEG): container finished" podID="7af31c1b-56c7-4184-a118-0c831bacd641" containerID="a23a2d524920843ea41b83b569281ab867d94c4c78bab941808341ec2d82d070" exitCode=0 Feb 18 02:02:34 crc kubenswrapper[4847]: I0218 02:02:34.099226 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5hkr" event={"ID":"7af31c1b-56c7-4184-a118-0c831bacd641","Type":"ContainerDied","Data":"a23a2d524920843ea41b83b569281ab867d94c4c78bab941808341ec2d82d070"} Feb 18 02:02:35 crc kubenswrapper[4847]: I0218 02:02:35.114474 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5hkr" event={"ID":"7af31c1b-56c7-4184-a118-0c831bacd641","Type":"ContainerStarted","Data":"4e04070d7a0fa1d0604923ca30ca54a09b9f87cc09caf5448bd3ee9a68539a25"} Feb 18 02:02:35 crc kubenswrapper[4847]: I0218 02:02:35.159624 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j5hkr" podStartSLOduration=2.665986453 
podStartE2EDuration="5.159571616s" podCreationTimestamp="2026-02-18 02:02:30 +0000 UTC" firstStartedPulling="2026-02-18 02:02:32.073407884 +0000 UTC m=+5825.450758856" lastFinishedPulling="2026-02-18 02:02:34.566993047 +0000 UTC m=+5827.944344019" observedRunningTime="2026-02-18 02:02:35.138826887 +0000 UTC m=+5828.516177899" watchObservedRunningTime="2026-02-18 02:02:35.159571616 +0000 UTC m=+5828.536922588" Feb 18 02:02:35 crc kubenswrapper[4847]: I0218 02:02:35.404583 4847 scope.go:117] "RemoveContainer" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" Feb 18 02:02:35 crc kubenswrapper[4847]: E0218 02:02:35.404988 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 02:02:37 crc kubenswrapper[4847]: I0218 02:02:37.522993 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dxmh8"] Feb 18 02:02:37 crc kubenswrapper[4847]: I0218 02:02:37.527280 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dxmh8" Feb 18 02:02:37 crc kubenswrapper[4847]: I0218 02:02:37.549266 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dxmh8"] Feb 18 02:02:37 crc kubenswrapper[4847]: I0218 02:02:37.576918 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a932516c-14a3-470a-8df7-4dccaa07c9f0-catalog-content\") pod \"redhat-operators-dxmh8\" (UID: \"a932516c-14a3-470a-8df7-4dccaa07c9f0\") " pod="openshift-marketplace/redhat-operators-dxmh8" Feb 18 02:02:37 crc kubenswrapper[4847]: I0218 02:02:37.577173 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a932516c-14a3-470a-8df7-4dccaa07c9f0-utilities\") pod \"redhat-operators-dxmh8\" (UID: \"a932516c-14a3-470a-8df7-4dccaa07c9f0\") " pod="openshift-marketplace/redhat-operators-dxmh8" Feb 18 02:02:37 crc kubenswrapper[4847]: I0218 02:02:37.577404 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrvnb\" (UniqueName: \"kubernetes.io/projected/a932516c-14a3-470a-8df7-4dccaa07c9f0-kube-api-access-nrvnb\") pod \"redhat-operators-dxmh8\" (UID: \"a932516c-14a3-470a-8df7-4dccaa07c9f0\") " pod="openshift-marketplace/redhat-operators-dxmh8" Feb 18 02:02:37 crc kubenswrapper[4847]: I0218 02:02:37.679405 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrvnb\" (UniqueName: \"kubernetes.io/projected/a932516c-14a3-470a-8df7-4dccaa07c9f0-kube-api-access-nrvnb\") pod \"redhat-operators-dxmh8\" (UID: \"a932516c-14a3-470a-8df7-4dccaa07c9f0\") " pod="openshift-marketplace/redhat-operators-dxmh8" Feb 18 02:02:37 crc kubenswrapper[4847]: I0218 02:02:37.679574 4847 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a932516c-14a3-470a-8df7-4dccaa07c9f0-catalog-content\") pod \"redhat-operators-dxmh8\" (UID: \"a932516c-14a3-470a-8df7-4dccaa07c9f0\") " pod="openshift-marketplace/redhat-operators-dxmh8" Feb 18 02:02:37 crc kubenswrapper[4847]: I0218 02:02:37.679650 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a932516c-14a3-470a-8df7-4dccaa07c9f0-utilities\") pod \"redhat-operators-dxmh8\" (UID: \"a932516c-14a3-470a-8df7-4dccaa07c9f0\") " pod="openshift-marketplace/redhat-operators-dxmh8" Feb 18 02:02:37 crc kubenswrapper[4847]: I0218 02:02:37.680268 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a932516c-14a3-470a-8df7-4dccaa07c9f0-utilities\") pod \"redhat-operators-dxmh8\" (UID: \"a932516c-14a3-470a-8df7-4dccaa07c9f0\") " pod="openshift-marketplace/redhat-operators-dxmh8" Feb 18 02:02:37 crc kubenswrapper[4847]: I0218 02:02:37.680429 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a932516c-14a3-470a-8df7-4dccaa07c9f0-catalog-content\") pod \"redhat-operators-dxmh8\" (UID: \"a932516c-14a3-470a-8df7-4dccaa07c9f0\") " pod="openshift-marketplace/redhat-operators-dxmh8" Feb 18 02:02:37 crc kubenswrapper[4847]: I0218 02:02:37.715461 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrvnb\" (UniqueName: \"kubernetes.io/projected/a932516c-14a3-470a-8df7-4dccaa07c9f0-kube-api-access-nrvnb\") pod \"redhat-operators-dxmh8\" (UID: \"a932516c-14a3-470a-8df7-4dccaa07c9f0\") " pod="openshift-marketplace/redhat-operators-dxmh8" Feb 18 02:02:37 crc kubenswrapper[4847]: I0218 02:02:37.875332 4847 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dxmh8" Feb 18 02:02:38 crc kubenswrapper[4847]: I0218 02:02:38.398194 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dxmh8"] Feb 18 02:02:38 crc kubenswrapper[4847]: E0218 02:02:38.419246 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 02:02:39 crc kubenswrapper[4847]: I0218 02:02:39.160226 4847 generic.go:334] "Generic (PLEG): container finished" podID="a932516c-14a3-470a-8df7-4dccaa07c9f0" containerID="4e76ecb793ab814e64240d874475308bba1eacf18a6bcd60444381b7320dfc9b" exitCode=0 Feb 18 02:02:39 crc kubenswrapper[4847]: I0218 02:02:39.160461 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxmh8" event={"ID":"a932516c-14a3-470a-8df7-4dccaa07c9f0","Type":"ContainerDied","Data":"4e76ecb793ab814e64240d874475308bba1eacf18a6bcd60444381b7320dfc9b"} Feb 18 02:02:39 crc kubenswrapper[4847]: I0218 02:02:39.160487 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxmh8" event={"ID":"a932516c-14a3-470a-8df7-4dccaa07c9f0","Type":"ContainerStarted","Data":"d51f5f38678c618160a0b36a56002c38a3583471ee1180c718f69adcaaecb8af"} Feb 18 02:02:40 crc kubenswrapper[4847]: I0218 02:02:40.496193 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j5hkr" Feb 18 02:02:40 crc kubenswrapper[4847]: I0218 02:02:40.497473 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j5hkr" Feb 18 02:02:40 crc kubenswrapper[4847]: I0218 02:02:40.571567 4847 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j5hkr" Feb 18 02:02:41 crc kubenswrapper[4847]: I0218 02:02:41.185270 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxmh8" event={"ID":"a932516c-14a3-470a-8df7-4dccaa07c9f0","Type":"ContainerStarted","Data":"67a7d0e93f8752712b9e1ce94cc3bef6385d60221c9d39994b5d98633e1b4c25"} Feb 18 02:02:41 crc kubenswrapper[4847]: I0218 02:02:41.258475 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j5hkr" Feb 18 02:02:42 crc kubenswrapper[4847]: I0218 02:02:42.107575 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5hkr"] Feb 18 02:02:43 crc kubenswrapper[4847]: I0218 02:02:43.241435 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j5hkr" podUID="7af31c1b-56c7-4184-a118-0c831bacd641" containerName="registry-server" containerID="cri-o://4e04070d7a0fa1d0604923ca30ca54a09b9f87cc09caf5448bd3ee9a68539a25" gracePeriod=2 Feb 18 02:02:43 crc kubenswrapper[4847]: E0218 02:02:43.392964 4847 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7af31c1b_56c7_4184_a118_0c831bacd641.slice/crio-4e04070d7a0fa1d0604923ca30ca54a09b9f87cc09caf5448bd3ee9a68539a25.scope\": RecentStats: unable to find data in memory cache]" Feb 18 02:02:44 crc kubenswrapper[4847]: I0218 02:02:44.256554 4847 generic.go:334] "Generic (PLEG): container finished" podID="a932516c-14a3-470a-8df7-4dccaa07c9f0" containerID="67a7d0e93f8752712b9e1ce94cc3bef6385d60221c9d39994b5d98633e1b4c25" exitCode=0 Feb 18 02:02:44 crc kubenswrapper[4847]: I0218 02:02:44.256673 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-dxmh8" event={"ID":"a932516c-14a3-470a-8df7-4dccaa07c9f0","Type":"ContainerDied","Data":"67a7d0e93f8752712b9e1ce94cc3bef6385d60221c9d39994b5d98633e1b4c25"} Feb 18 02:02:45 crc kubenswrapper[4847]: I0218 02:02:45.271907 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxmh8" event={"ID":"a932516c-14a3-470a-8df7-4dccaa07c9f0","Type":"ContainerStarted","Data":"4376dfe5779a9730a2435047569c077ea4d7cf77f103f90fad05f7893540ea69"} Feb 18 02:02:45 crc kubenswrapper[4847]: I0218 02:02:45.298278 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dxmh8" podStartSLOduration=2.728398268 podStartE2EDuration="8.298256934s" podCreationTimestamp="2026-02-18 02:02:37 +0000 UTC" firstStartedPulling="2026-02-18 02:02:39.162083344 +0000 UTC m=+5832.539434286" lastFinishedPulling="2026-02-18 02:02:44.731942 +0000 UTC m=+5838.109292952" observedRunningTime="2026-02-18 02:02:45.290204987 +0000 UTC m=+5838.667555939" watchObservedRunningTime="2026-02-18 02:02:45.298256934 +0000 UTC m=+5838.675607876" Feb 18 02:02:46 crc kubenswrapper[4847]: I0218 02:02:46.291578 4847 generic.go:334] "Generic (PLEG): container finished" podID="7af31c1b-56c7-4184-a118-0c831bacd641" containerID="4e04070d7a0fa1d0604923ca30ca54a09b9f87cc09caf5448bd3ee9a68539a25" exitCode=0 Feb 18 02:02:46 crc kubenswrapper[4847]: I0218 02:02:46.291644 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5hkr" event={"ID":"7af31c1b-56c7-4184-a118-0c831bacd641","Type":"ContainerDied","Data":"4e04070d7a0fa1d0604923ca30ca54a09b9f87cc09caf5448bd3ee9a68539a25"} Feb 18 02:02:46 crc kubenswrapper[4847]: E0218 02:02:46.410425 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 02:02:46 crc kubenswrapper[4847]: I0218 02:02:46.504802 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5hkr" Feb 18 02:02:46 crc kubenswrapper[4847]: I0218 02:02:46.615976 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7af31c1b-56c7-4184-a118-0c831bacd641-utilities\") pod \"7af31c1b-56c7-4184-a118-0c831bacd641\" (UID: \"7af31c1b-56c7-4184-a118-0c831bacd641\") " Feb 18 02:02:46 crc kubenswrapper[4847]: I0218 02:02:46.616211 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqlnj\" (UniqueName: \"kubernetes.io/projected/7af31c1b-56c7-4184-a118-0c831bacd641-kube-api-access-nqlnj\") pod \"7af31c1b-56c7-4184-a118-0c831bacd641\" (UID: \"7af31c1b-56c7-4184-a118-0c831bacd641\") " Feb 18 02:02:46 crc kubenswrapper[4847]: I0218 02:02:46.616239 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af31c1b-56c7-4184-a118-0c831bacd641-catalog-content\") pod \"7af31c1b-56c7-4184-a118-0c831bacd641\" (UID: \"7af31c1b-56c7-4184-a118-0c831bacd641\") " Feb 18 02:02:46 crc kubenswrapper[4847]: I0218 02:02:46.617124 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7af31c1b-56c7-4184-a118-0c831bacd641-utilities" (OuterVolumeSpecName: "utilities") pod "7af31c1b-56c7-4184-a118-0c831bacd641" (UID: "7af31c1b-56c7-4184-a118-0c831bacd641"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 02:02:46 crc kubenswrapper[4847]: I0218 02:02:46.623087 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7af31c1b-56c7-4184-a118-0c831bacd641-kube-api-access-nqlnj" (OuterVolumeSpecName: "kube-api-access-nqlnj") pod "7af31c1b-56c7-4184-a118-0c831bacd641" (UID: "7af31c1b-56c7-4184-a118-0c831bacd641"). InnerVolumeSpecName "kube-api-access-nqlnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 02:02:46 crc kubenswrapper[4847]: I0218 02:02:46.637696 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7af31c1b-56c7-4184-a118-0c831bacd641-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7af31c1b-56c7-4184-a118-0c831bacd641" (UID: "7af31c1b-56c7-4184-a118-0c831bacd641"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 02:02:46 crc kubenswrapper[4847]: I0218 02:02:46.717687 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqlnj\" (UniqueName: \"kubernetes.io/projected/7af31c1b-56c7-4184-a118-0c831bacd641-kube-api-access-nqlnj\") on node \"crc\" DevicePath \"\"" Feb 18 02:02:46 crc kubenswrapper[4847]: I0218 02:02:46.717726 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af31c1b-56c7-4184-a118-0c831bacd641-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 02:02:46 crc kubenswrapper[4847]: I0218 02:02:46.717736 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7af31c1b-56c7-4184-a118-0c831bacd641-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 02:02:47 crc kubenswrapper[4847]: I0218 02:02:47.303407 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5hkr" 
event={"ID":"7af31c1b-56c7-4184-a118-0c831bacd641","Type":"ContainerDied","Data":"c49d20294b86c2b24e9a4c16885a7aea3e1a8500e59d762e4d09e22165e9c71c"} Feb 18 02:02:47 crc kubenswrapper[4847]: I0218 02:02:47.303462 4847 scope.go:117] "RemoveContainer" containerID="4e04070d7a0fa1d0604923ca30ca54a09b9f87cc09caf5448bd3ee9a68539a25" Feb 18 02:02:47 crc kubenswrapper[4847]: I0218 02:02:47.303463 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5hkr" Feb 18 02:02:47 crc kubenswrapper[4847]: I0218 02:02:47.340248 4847 scope.go:117] "RemoveContainer" containerID="a23a2d524920843ea41b83b569281ab867d94c4c78bab941808341ec2d82d070" Feb 18 02:02:47 crc kubenswrapper[4847]: I0218 02:02:47.349474 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5hkr"] Feb 18 02:02:47 crc kubenswrapper[4847]: I0218 02:02:47.357387 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5hkr"] Feb 18 02:02:47 crc kubenswrapper[4847]: I0218 02:02:47.416859 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7af31c1b-56c7-4184-a118-0c831bacd641" path="/var/lib/kubelet/pods/7af31c1b-56c7-4184-a118-0c831bacd641/volumes" Feb 18 02:02:47 crc kubenswrapper[4847]: I0218 02:02:47.690758 4847 scope.go:117] "RemoveContainer" containerID="5236b0ba034bc5b64f2baa5bc2cea22fe43198fb6b2ca3daea7f28e227760be4" Feb 18 02:02:47 crc kubenswrapper[4847]: I0218 02:02:47.875933 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dxmh8" Feb 18 02:02:47 crc kubenswrapper[4847]: I0218 02:02:47.876275 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dxmh8" Feb 18 02:02:48 crc kubenswrapper[4847]: I0218 02:02:48.926927 4847 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-dxmh8" podUID="a932516c-14a3-470a-8df7-4dccaa07c9f0" containerName="registry-server" probeResult="failure" output=< Feb 18 02:02:48 crc kubenswrapper[4847]: timeout: failed to connect service ":50051" within 1s Feb 18 02:02:48 crc kubenswrapper[4847]: > Feb 18 02:02:50 crc kubenswrapper[4847]: I0218 02:02:50.406346 4847 scope.go:117] "RemoveContainer" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" Feb 18 02:02:50 crc kubenswrapper[4847]: E0218 02:02:50.407330 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 02:02:50 crc kubenswrapper[4847]: E0218 02:02:50.408780 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 02:02:57 crc kubenswrapper[4847]: E0218 02:02:57.415732 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 02:02:57 crc kubenswrapper[4847]: I0218 02:02:57.964755 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dxmh8" Feb 18 02:02:58 crc 
kubenswrapper[4847]: I0218 02:02:58.060208 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dxmh8" Feb 18 02:02:58 crc kubenswrapper[4847]: I0218 02:02:58.233491 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dxmh8"] Feb 18 02:02:59 crc kubenswrapper[4847]: I0218 02:02:59.453476 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dxmh8" podUID="a932516c-14a3-470a-8df7-4dccaa07c9f0" containerName="registry-server" containerID="cri-o://4376dfe5779a9730a2435047569c077ea4d7cf77f103f90fad05f7893540ea69" gracePeriod=2 Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.030580 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dxmh8" Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.045328 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a932516c-14a3-470a-8df7-4dccaa07c9f0-catalog-content\") pod \"a932516c-14a3-470a-8df7-4dccaa07c9f0\" (UID: \"a932516c-14a3-470a-8df7-4dccaa07c9f0\") " Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.045769 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a932516c-14a3-470a-8df7-4dccaa07c9f0-utilities\") pod \"a932516c-14a3-470a-8df7-4dccaa07c9f0\" (UID: \"a932516c-14a3-470a-8df7-4dccaa07c9f0\") " Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.045907 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrvnb\" (UniqueName: \"kubernetes.io/projected/a932516c-14a3-470a-8df7-4dccaa07c9f0-kube-api-access-nrvnb\") pod \"a932516c-14a3-470a-8df7-4dccaa07c9f0\" (UID: \"a932516c-14a3-470a-8df7-4dccaa07c9f0\") " Feb 18 02:03:00 crc 
kubenswrapper[4847]: I0218 02:03:00.050107 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a932516c-14a3-470a-8df7-4dccaa07c9f0-utilities" (OuterVolumeSpecName: "utilities") pod "a932516c-14a3-470a-8df7-4dccaa07c9f0" (UID: "a932516c-14a3-470a-8df7-4dccaa07c9f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.058049 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a932516c-14a3-470a-8df7-4dccaa07c9f0-kube-api-access-nrvnb" (OuterVolumeSpecName: "kube-api-access-nrvnb") pod "a932516c-14a3-470a-8df7-4dccaa07c9f0" (UID: "a932516c-14a3-470a-8df7-4dccaa07c9f0"). InnerVolumeSpecName "kube-api-access-nrvnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.149465 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrvnb\" (UniqueName: \"kubernetes.io/projected/a932516c-14a3-470a-8df7-4dccaa07c9f0-kube-api-access-nrvnb\") on node \"crc\" DevicePath \"\"" Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.149567 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a932516c-14a3-470a-8df7-4dccaa07c9f0-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.196583 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a932516c-14a3-470a-8df7-4dccaa07c9f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a932516c-14a3-470a-8df7-4dccaa07c9f0" (UID: "a932516c-14a3-470a-8df7-4dccaa07c9f0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.251286 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a932516c-14a3-470a-8df7-4dccaa07c9f0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.483722 4847 generic.go:334] "Generic (PLEG): container finished" podID="a932516c-14a3-470a-8df7-4dccaa07c9f0" containerID="4376dfe5779a9730a2435047569c077ea4d7cf77f103f90fad05f7893540ea69" exitCode=0 Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.483834 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxmh8" event={"ID":"a932516c-14a3-470a-8df7-4dccaa07c9f0","Type":"ContainerDied","Data":"4376dfe5779a9730a2435047569c077ea4d7cf77f103f90fad05f7893540ea69"} Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.483887 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxmh8" event={"ID":"a932516c-14a3-470a-8df7-4dccaa07c9f0","Type":"ContainerDied","Data":"d51f5f38678c618160a0b36a56002c38a3583471ee1180c718f69adcaaecb8af"} Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.483929 4847 scope.go:117] "RemoveContainer" containerID="4376dfe5779a9730a2435047569c077ea4d7cf77f103f90fad05f7893540ea69" Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.484347 4847 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dxmh8" Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.521718 4847 scope.go:117] "RemoveContainer" containerID="67a7d0e93f8752712b9e1ce94cc3bef6385d60221c9d39994b5d98633e1b4c25" Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.539899 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dxmh8"] Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.550735 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dxmh8"] Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.567819 4847 scope.go:117] "RemoveContainer" containerID="4e76ecb793ab814e64240d874475308bba1eacf18a6bcd60444381b7320dfc9b" Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.628216 4847 scope.go:117] "RemoveContainer" containerID="4376dfe5779a9730a2435047569c077ea4d7cf77f103f90fad05f7893540ea69" Feb 18 02:03:00 crc kubenswrapper[4847]: E0218 02:03:00.629123 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4376dfe5779a9730a2435047569c077ea4d7cf77f103f90fad05f7893540ea69\": container with ID starting with 4376dfe5779a9730a2435047569c077ea4d7cf77f103f90fad05f7893540ea69 not found: ID does not exist" containerID="4376dfe5779a9730a2435047569c077ea4d7cf77f103f90fad05f7893540ea69" Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.629399 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4376dfe5779a9730a2435047569c077ea4d7cf77f103f90fad05f7893540ea69"} err="failed to get container status \"4376dfe5779a9730a2435047569c077ea4d7cf77f103f90fad05f7893540ea69\": rpc error: code = NotFound desc = could not find container \"4376dfe5779a9730a2435047569c077ea4d7cf77f103f90fad05f7893540ea69\": container with ID starting with 4376dfe5779a9730a2435047569c077ea4d7cf77f103f90fad05f7893540ea69 not found: ID does 
not exist" Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.629570 4847 scope.go:117] "RemoveContainer" containerID="67a7d0e93f8752712b9e1ce94cc3bef6385d60221c9d39994b5d98633e1b4c25" Feb 18 02:03:00 crc kubenswrapper[4847]: E0218 02:03:00.630473 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67a7d0e93f8752712b9e1ce94cc3bef6385d60221c9d39994b5d98633e1b4c25\": container with ID starting with 67a7d0e93f8752712b9e1ce94cc3bef6385d60221c9d39994b5d98633e1b4c25 not found: ID does not exist" containerID="67a7d0e93f8752712b9e1ce94cc3bef6385d60221c9d39994b5d98633e1b4c25" Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.630534 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67a7d0e93f8752712b9e1ce94cc3bef6385d60221c9d39994b5d98633e1b4c25"} err="failed to get container status \"67a7d0e93f8752712b9e1ce94cc3bef6385d60221c9d39994b5d98633e1b4c25\": rpc error: code = NotFound desc = could not find container \"67a7d0e93f8752712b9e1ce94cc3bef6385d60221c9d39994b5d98633e1b4c25\": container with ID starting with 67a7d0e93f8752712b9e1ce94cc3bef6385d60221c9d39994b5d98633e1b4c25 not found: ID does not exist" Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.630564 4847 scope.go:117] "RemoveContainer" containerID="4e76ecb793ab814e64240d874475308bba1eacf18a6bcd60444381b7320dfc9b" Feb 18 02:03:00 crc kubenswrapper[4847]: E0218 02:03:00.630881 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e76ecb793ab814e64240d874475308bba1eacf18a6bcd60444381b7320dfc9b\": container with ID starting with 4e76ecb793ab814e64240d874475308bba1eacf18a6bcd60444381b7320dfc9b not found: ID does not exist" containerID="4e76ecb793ab814e64240d874475308bba1eacf18a6bcd60444381b7320dfc9b" Feb 18 02:03:00 crc kubenswrapper[4847]: I0218 02:03:00.630946 4847 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e76ecb793ab814e64240d874475308bba1eacf18a6bcd60444381b7320dfc9b"} err="failed to get container status \"4e76ecb793ab814e64240d874475308bba1eacf18a6bcd60444381b7320dfc9b\": rpc error: code = NotFound desc = could not find container \"4e76ecb793ab814e64240d874475308bba1eacf18a6bcd60444381b7320dfc9b\": container with ID starting with 4e76ecb793ab814e64240d874475308bba1eacf18a6bcd60444381b7320dfc9b not found: ID does not exist" Feb 18 02:03:01 crc kubenswrapper[4847]: I0218 02:03:01.427053 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a932516c-14a3-470a-8df7-4dccaa07c9f0" path="/var/lib/kubelet/pods/a932516c-14a3-470a-8df7-4dccaa07c9f0/volumes" Feb 18 02:03:02 crc kubenswrapper[4847]: I0218 02:03:02.404181 4847 scope.go:117] "RemoveContainer" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" Feb 18 02:03:02 crc kubenswrapper[4847]: E0218 02:03:02.404935 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 02:03:03 crc kubenswrapper[4847]: E0218 02:03:03.408304 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 02:03:10 crc kubenswrapper[4847]: E0218 02:03:10.408383 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 02:03:14 crc kubenswrapper[4847]: E0218 02:03:14.408798 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 02:03:17 crc kubenswrapper[4847]: I0218 02:03:17.419449 4847 scope.go:117] "RemoveContainer" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" Feb 18 02:03:17 crc kubenswrapper[4847]: E0218 02:03:17.420892 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 02:03:21 crc kubenswrapper[4847]: E0218 02:03:21.406938 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 02:03:26 crc kubenswrapper[4847]: E0218 02:03:26.409423 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 02:03:28 crc kubenswrapper[4847]: I0218 02:03:28.405916 4847 scope.go:117] "RemoveContainer" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" Feb 18 02:03:28 crc kubenswrapper[4847]: E0218 02:03:28.406876 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 02:03:33 crc kubenswrapper[4847]: E0218 02:03:33.406882 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 02:03:39 crc kubenswrapper[4847]: E0218 02:03:39.407421 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 02:03:42 crc kubenswrapper[4847]: I0218 02:03:42.421877 4847 scope.go:117] "RemoveContainer" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" Feb 18 02:03:42 crc kubenswrapper[4847]: E0218 02:03:42.422468 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 02:03:46 crc kubenswrapper[4847]: I0218 02:03:46.408986 4847 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 02:03:46 crc kubenswrapper[4847]: E0218 02:03:46.510507 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 02:03:46 crc kubenswrapper[4847]: E0218 02:03:46.510583 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Feb 18 02:03:46 crc kubenswrapper[4847]: E0218 02:03:46.510783 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjwt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-k4t5r_openstack(452f74c1-fa5f-464b-9943-a4a1c2d5c48a): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 18 02:03:46 crc kubenswrapper[4847]: E0218 02:03:46.512185 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 02:03:54 crc kubenswrapper[4847]: E0218 02:03:54.410765 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 02:03:56 crc kubenswrapper[4847]: I0218 02:03:56.405486 4847 scope.go:117] "RemoveContainer" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" Feb 18 02:03:56 crc kubenswrapper[4847]: E0218 02:03:56.406843 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 02:04:01 crc kubenswrapper[4847]: E0218 02:04:01.406982 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 02:04:07 crc kubenswrapper[4847]: I0218 02:04:07.414929 4847 scope.go:117] "RemoveContainer" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" Feb 18 02:04:07 crc kubenswrapper[4847]: E0218 02:04:07.417589 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 02:04:07 crc kubenswrapper[4847]: E0218 02:04:07.418041 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 02:04:14 crc kubenswrapper[4847]: E0218 02:04:14.406313 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 02:04:21 crc kubenswrapper[4847]: E0218 02:04:21.532346 4847 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 02:04:21 crc kubenswrapper[4847]: E0218 02:04:21.533715 4847 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 18 02:04:21 crc kubenswrapper[4847]: E0218 02:04:21.533965 4847 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9fhbdh67dh64h568hch65dhb8h67chf7h66dhfh585h565h85h554h68ch8bh56ch677h57bhb5h544h575hd7h5f8hc7h66dh669h57bh589h56cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-
ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6c4d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 18 02:04:21 crc kubenswrapper[4847]: E0218 02:04:21.535338 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 02:04:22 crc kubenswrapper[4847]: I0218 02:04:22.404437 4847 scope.go:117] "RemoveContainer" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" Feb 18 02:04:22 crc kubenswrapper[4847]: E0218 02:04:22.404926 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 02:04:27 crc kubenswrapper[4847]: E0218 02:04:27.412909 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 02:04:36 crc kubenswrapper[4847]: I0218 02:04:36.406186 4847 scope.go:117] "RemoveContainer" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" Feb 18 02:04:36 crc kubenswrapper[4847]: E0218 02:04:36.407129 4847 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 02:04:36 crc kubenswrapper[4847]: E0218 02:04:36.407924 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 02:04:40 crc kubenswrapper[4847]: E0218 02:04:40.407771 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 02:04:44 crc kubenswrapper[4847]: I0218 02:04:44.074709 4847 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-22dm5"] Feb 18 02:04:44 crc kubenswrapper[4847]: E0218 02:04:44.081149 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7af31c1b-56c7-4184-a118-0c831bacd641" containerName="extract-content" Feb 18 02:04:44 crc kubenswrapper[4847]: I0218 02:04:44.081193 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="7af31c1b-56c7-4184-a118-0c831bacd641" containerName="extract-content" Feb 18 02:04:44 crc kubenswrapper[4847]: E0218 02:04:44.081226 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7af31c1b-56c7-4184-a118-0c831bacd641" containerName="registry-server" Feb 18 02:04:44 
crc kubenswrapper[4847]: I0218 02:04:44.081234 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="7af31c1b-56c7-4184-a118-0c831bacd641" containerName="registry-server" Feb 18 02:04:44 crc kubenswrapper[4847]: E0218 02:04:44.081258 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a932516c-14a3-470a-8df7-4dccaa07c9f0" containerName="extract-utilities" Feb 18 02:04:44 crc kubenswrapper[4847]: I0218 02:04:44.081265 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="a932516c-14a3-470a-8df7-4dccaa07c9f0" containerName="extract-utilities" Feb 18 02:04:44 crc kubenswrapper[4847]: E0218 02:04:44.081284 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a932516c-14a3-470a-8df7-4dccaa07c9f0" containerName="registry-server" Feb 18 02:04:44 crc kubenswrapper[4847]: I0218 02:04:44.081290 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="a932516c-14a3-470a-8df7-4dccaa07c9f0" containerName="registry-server" Feb 18 02:04:44 crc kubenswrapper[4847]: E0218 02:04:44.081304 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7af31c1b-56c7-4184-a118-0c831bacd641" containerName="extract-utilities" Feb 18 02:04:44 crc kubenswrapper[4847]: I0218 02:04:44.081313 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="7af31c1b-56c7-4184-a118-0c831bacd641" containerName="extract-utilities" Feb 18 02:04:44 crc kubenswrapper[4847]: E0218 02:04:44.081327 4847 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a932516c-14a3-470a-8df7-4dccaa07c9f0" containerName="extract-content" Feb 18 02:04:44 crc kubenswrapper[4847]: I0218 02:04:44.081333 4847 state_mem.go:107] "Deleted CPUSet assignment" podUID="a932516c-14a3-470a-8df7-4dccaa07c9f0" containerName="extract-content" Feb 18 02:04:44 crc kubenswrapper[4847]: I0218 02:04:44.081602 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="7af31c1b-56c7-4184-a118-0c831bacd641" containerName="registry-server" Feb 18 02:04:44 crc 
kubenswrapper[4847]: I0218 02:04:44.081647 4847 memory_manager.go:354] "RemoveStaleState removing state" podUID="a932516c-14a3-470a-8df7-4dccaa07c9f0" containerName="registry-server" Feb 18 02:04:44 crc kubenswrapper[4847]: I0218 02:04:44.083636 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-22dm5" Feb 18 02:04:44 crc kubenswrapper[4847]: I0218 02:04:44.106665 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-22dm5"] Feb 18 02:04:44 crc kubenswrapper[4847]: I0218 02:04:44.158074 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gptf\" (UniqueName: \"kubernetes.io/projected/bf862fa5-6af4-41d7-9b30-d6ed5111c4fe-kube-api-access-2gptf\") pod \"certified-operators-22dm5\" (UID: \"bf862fa5-6af4-41d7-9b30-d6ed5111c4fe\") " pod="openshift-marketplace/certified-operators-22dm5" Feb 18 02:04:44 crc kubenswrapper[4847]: I0218 02:04:44.158129 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf862fa5-6af4-41d7-9b30-d6ed5111c4fe-catalog-content\") pod \"certified-operators-22dm5\" (UID: \"bf862fa5-6af4-41d7-9b30-d6ed5111c4fe\") " pod="openshift-marketplace/certified-operators-22dm5" Feb 18 02:04:44 crc kubenswrapper[4847]: I0218 02:04:44.158164 4847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf862fa5-6af4-41d7-9b30-d6ed5111c4fe-utilities\") pod \"certified-operators-22dm5\" (UID: \"bf862fa5-6af4-41d7-9b30-d6ed5111c4fe\") " pod="openshift-marketplace/certified-operators-22dm5" Feb 18 02:04:44 crc kubenswrapper[4847]: I0218 02:04:44.259798 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gptf\" (UniqueName: 
\"kubernetes.io/projected/bf862fa5-6af4-41d7-9b30-d6ed5111c4fe-kube-api-access-2gptf\") pod \"certified-operators-22dm5\" (UID: \"bf862fa5-6af4-41d7-9b30-d6ed5111c4fe\") " pod="openshift-marketplace/certified-operators-22dm5" Feb 18 02:04:44 crc kubenswrapper[4847]: I0218 02:04:44.259852 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf862fa5-6af4-41d7-9b30-d6ed5111c4fe-catalog-content\") pod \"certified-operators-22dm5\" (UID: \"bf862fa5-6af4-41d7-9b30-d6ed5111c4fe\") " pod="openshift-marketplace/certified-operators-22dm5" Feb 18 02:04:44 crc kubenswrapper[4847]: I0218 02:04:44.259894 4847 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf862fa5-6af4-41d7-9b30-d6ed5111c4fe-utilities\") pod \"certified-operators-22dm5\" (UID: \"bf862fa5-6af4-41d7-9b30-d6ed5111c4fe\") " pod="openshift-marketplace/certified-operators-22dm5" Feb 18 02:04:44 crc kubenswrapper[4847]: I0218 02:04:44.260353 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf862fa5-6af4-41d7-9b30-d6ed5111c4fe-utilities\") pod \"certified-operators-22dm5\" (UID: \"bf862fa5-6af4-41d7-9b30-d6ed5111c4fe\") " pod="openshift-marketplace/certified-operators-22dm5" Feb 18 02:04:44 crc kubenswrapper[4847]: I0218 02:04:44.260394 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf862fa5-6af4-41d7-9b30-d6ed5111c4fe-catalog-content\") pod \"certified-operators-22dm5\" (UID: \"bf862fa5-6af4-41d7-9b30-d6ed5111c4fe\") " pod="openshift-marketplace/certified-operators-22dm5" Feb 18 02:04:44 crc kubenswrapper[4847]: I0218 02:04:44.279137 4847 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gptf\" (UniqueName: 
\"kubernetes.io/projected/bf862fa5-6af4-41d7-9b30-d6ed5111c4fe-kube-api-access-2gptf\") pod \"certified-operators-22dm5\" (UID: \"bf862fa5-6af4-41d7-9b30-d6ed5111c4fe\") " pod="openshift-marketplace/certified-operators-22dm5" Feb 18 02:04:44 crc kubenswrapper[4847]: I0218 02:04:44.411520 4847 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-22dm5" Feb 18 02:04:44 crc kubenswrapper[4847]: I0218 02:04:44.987014 4847 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-22dm5"] Feb 18 02:04:45 crc kubenswrapper[4847]: I0218 02:04:45.910801 4847 generic.go:334] "Generic (PLEG): container finished" podID="bf862fa5-6af4-41d7-9b30-d6ed5111c4fe" containerID="65df47f1c7060ac082575013a70199994cfc8ff6fa26b3a869593eb61cca4fb9" exitCode=0 Feb 18 02:04:45 crc kubenswrapper[4847]: I0218 02:04:45.910916 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22dm5" event={"ID":"bf862fa5-6af4-41d7-9b30-d6ed5111c4fe","Type":"ContainerDied","Data":"65df47f1c7060ac082575013a70199994cfc8ff6fa26b3a869593eb61cca4fb9"} Feb 18 02:04:45 crc kubenswrapper[4847]: I0218 02:04:45.911237 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22dm5" event={"ID":"bf862fa5-6af4-41d7-9b30-d6ed5111c4fe","Type":"ContainerStarted","Data":"1910a934d5e3597fe82401750aa3cc16add07c1156de286def40a1eb9add8e89"} Feb 18 02:04:47 crc kubenswrapper[4847]: I0218 02:04:47.935242 4847 generic.go:334] "Generic (PLEG): container finished" podID="bf862fa5-6af4-41d7-9b30-d6ed5111c4fe" containerID="9233404d0433a8c2ce612f7ee33d3a77c55cef11ed65e085d103d57896e9d19d" exitCode=0 Feb 18 02:04:47 crc kubenswrapper[4847]: I0218 02:04:47.935393 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22dm5" 
event={"ID":"bf862fa5-6af4-41d7-9b30-d6ed5111c4fe","Type":"ContainerDied","Data":"9233404d0433a8c2ce612f7ee33d3a77c55cef11ed65e085d103d57896e9d19d"} Feb 18 02:04:48 crc kubenswrapper[4847]: I0218 02:04:48.949135 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22dm5" event={"ID":"bf862fa5-6af4-41d7-9b30-d6ed5111c4fe","Type":"ContainerStarted","Data":"8a7684d97c5af27f77d7c6313ed98aeadc711d0e9c7d4c4745a1ac42a8d526b2"} Feb 18 02:04:48 crc kubenswrapper[4847]: I0218 02:04:48.977454 4847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-22dm5" podStartSLOduration=2.535552493 podStartE2EDuration="4.977432451s" podCreationTimestamp="2026-02-18 02:04:44 +0000 UTC" firstStartedPulling="2026-02-18 02:04:45.914041825 +0000 UTC m=+5959.291392767" lastFinishedPulling="2026-02-18 02:04:48.355921783 +0000 UTC m=+5961.733272725" observedRunningTime="2026-02-18 02:04:48.972832078 +0000 UTC m=+5962.350183030" watchObservedRunningTime="2026-02-18 02:04:48.977432451 +0000 UTC m=+5962.354783403" Feb 18 02:04:49 crc kubenswrapper[4847]: I0218 02:04:49.404913 4847 scope.go:117] "RemoveContainer" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc" Feb 18 02:04:49 crc kubenswrapper[4847]: E0218 02:04:49.405301 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xsj47_openshift-machine-config-operator(ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5)\"" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" podUID="ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5" Feb 18 02:04:50 crc kubenswrapper[4847]: E0218 02:04:50.407438 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6" Feb 18 02:04:53 crc kubenswrapper[4847]: E0218 02:04:53.407047 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a" Feb 18 02:04:54 crc kubenswrapper[4847]: I0218 02:04:54.412598 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-22dm5" Feb 18 02:04:54 crc kubenswrapper[4847]: I0218 02:04:54.414574 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-22dm5" Feb 18 02:04:54 crc kubenswrapper[4847]: I0218 02:04:54.482282 4847 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-22dm5" Feb 18 02:04:55 crc kubenswrapper[4847]: I0218 02:04:55.942683 4847 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-22dm5" Feb 18 02:04:56 crc kubenswrapper[4847]: I0218 02:04:56.020397 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-22dm5"] Feb 18 02:04:57 crc kubenswrapper[4847]: I0218 02:04:57.068297 4847 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-22dm5" podUID="bf862fa5-6af4-41d7-9b30-d6ed5111c4fe" containerName="registry-server" containerID="cri-o://8a7684d97c5af27f77d7c6313ed98aeadc711d0e9c7d4c4745a1ac42a8d526b2" gracePeriod=2 Feb 18 02:04:57 crc kubenswrapper[4847]: E0218 02:04:57.245184 4847 cadvisor_stats_provider.go:516] "Partial 
failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf862fa5_6af4_41d7_9b30_d6ed5111c4fe.slice/crio-8a7684d97c5af27f77d7c6313ed98aeadc711d0e9c7d4c4745a1ac42a8d526b2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf862fa5_6af4_41d7_9b30_d6ed5111c4fe.slice/crio-conmon-8a7684d97c5af27f77d7c6313ed98aeadc711d0e9c7d4c4745a1ac42a8d526b2.scope\": RecentStats: unable to find data in memory cache]" Feb 18 02:04:57 crc kubenswrapper[4847]: I0218 02:04:57.649097 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-22dm5" Feb 18 02:04:57 crc kubenswrapper[4847]: I0218 02:04:57.681945 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gptf\" (UniqueName: \"kubernetes.io/projected/bf862fa5-6af4-41d7-9b30-d6ed5111c4fe-kube-api-access-2gptf\") pod \"bf862fa5-6af4-41d7-9b30-d6ed5111c4fe\" (UID: \"bf862fa5-6af4-41d7-9b30-d6ed5111c4fe\") " Feb 18 02:04:57 crc kubenswrapper[4847]: I0218 02:04:57.682028 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf862fa5-6af4-41d7-9b30-d6ed5111c4fe-utilities\") pod \"bf862fa5-6af4-41d7-9b30-d6ed5111c4fe\" (UID: \"bf862fa5-6af4-41d7-9b30-d6ed5111c4fe\") " Feb 18 02:04:57 crc kubenswrapper[4847]: I0218 02:04:57.682191 4847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf862fa5-6af4-41d7-9b30-d6ed5111c4fe-catalog-content\") pod \"bf862fa5-6af4-41d7-9b30-d6ed5111c4fe\" (UID: \"bf862fa5-6af4-41d7-9b30-d6ed5111c4fe\") " Feb 18 02:04:57 crc kubenswrapper[4847]: I0218 02:04:57.684654 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/bf862fa5-6af4-41d7-9b30-d6ed5111c4fe-utilities" (OuterVolumeSpecName: "utilities") pod "bf862fa5-6af4-41d7-9b30-d6ed5111c4fe" (UID: "bf862fa5-6af4-41d7-9b30-d6ed5111c4fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 02:04:57 crc kubenswrapper[4847]: I0218 02:04:57.695391 4847 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf862fa5-6af4-41d7-9b30-d6ed5111c4fe-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 02:04:57 crc kubenswrapper[4847]: I0218 02:04:57.757348 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf862fa5-6af4-41d7-9b30-d6ed5111c4fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf862fa5-6af4-41d7-9b30-d6ed5111c4fe" (UID: "bf862fa5-6af4-41d7-9b30-d6ed5111c4fe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 18 02:04:57 crc kubenswrapper[4847]: I0218 02:04:57.797657 4847 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf862fa5-6af4-41d7-9b30-d6ed5111c4fe-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 02:04:58 crc kubenswrapper[4847]: I0218 02:04:58.083002 4847 generic.go:334] "Generic (PLEG): container finished" podID="bf862fa5-6af4-41d7-9b30-d6ed5111c4fe" containerID="8a7684d97c5af27f77d7c6313ed98aeadc711d0e9c7d4c4745a1ac42a8d526b2" exitCode=0 Feb 18 02:04:58 crc kubenswrapper[4847]: I0218 02:04:58.083112 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-22dm5" event={"ID":"bf862fa5-6af4-41d7-9b30-d6ed5111c4fe","Type":"ContainerDied","Data":"8a7684d97c5af27f77d7c6313ed98aeadc711d0e9c7d4c4745a1ac42a8d526b2"} Feb 18 02:04:58 crc kubenswrapper[4847]: I0218 02:04:58.083175 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-22dm5" event={"ID":"bf862fa5-6af4-41d7-9b30-d6ed5111c4fe","Type":"ContainerDied","Data":"1910a934d5e3597fe82401750aa3cc16add07c1156de286def40a1eb9add8e89"}
Feb 18 02:04:58 crc kubenswrapper[4847]: I0218 02:04:58.083196 4847 scope.go:117] "RemoveContainer" containerID="8a7684d97c5af27f77d7c6313ed98aeadc711d0e9c7d4c4745a1ac42a8d526b2"
Feb 18 02:04:58 crc kubenswrapper[4847]: I0218 02:04:58.083354 4847 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-22dm5"
Feb 18 02:04:58 crc kubenswrapper[4847]: I0218 02:04:58.114624 4847 scope.go:117] "RemoveContainer" containerID="9233404d0433a8c2ce612f7ee33d3a77c55cef11ed65e085d103d57896e9d19d"
Feb 18 02:04:58 crc kubenswrapper[4847]: I0218 02:04:58.175136 4847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf862fa5-6af4-41d7-9b30-d6ed5111c4fe-kube-api-access-2gptf" (OuterVolumeSpecName: "kube-api-access-2gptf") pod "bf862fa5-6af4-41d7-9b30-d6ed5111c4fe" (UID: "bf862fa5-6af4-41d7-9b30-d6ed5111c4fe"). InnerVolumeSpecName "kube-api-access-2gptf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 18 02:04:58 crc kubenswrapper[4847]: I0218 02:04:58.198369 4847 scope.go:117] "RemoveContainer" containerID="65df47f1c7060ac082575013a70199994cfc8ff6fa26b3a869593eb61cca4fb9"
Feb 18 02:04:58 crc kubenswrapper[4847]: I0218 02:04:58.205397 4847 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gptf\" (UniqueName: \"kubernetes.io/projected/bf862fa5-6af4-41d7-9b30-d6ed5111c4fe-kube-api-access-2gptf\") on node \"crc\" DevicePath \"\""
Feb 18 02:04:58 crc kubenswrapper[4847]: I0218 02:04:58.317437 4847 scope.go:117] "RemoveContainer" containerID="8a7684d97c5af27f77d7c6313ed98aeadc711d0e9c7d4c4745a1ac42a8d526b2"
Feb 18 02:04:58 crc kubenswrapper[4847]: E0218 02:04:58.317884 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a7684d97c5af27f77d7c6313ed98aeadc711d0e9c7d4c4745a1ac42a8d526b2\": container with ID starting with 8a7684d97c5af27f77d7c6313ed98aeadc711d0e9c7d4c4745a1ac42a8d526b2 not found: ID does not exist" containerID="8a7684d97c5af27f77d7c6313ed98aeadc711d0e9c7d4c4745a1ac42a8d526b2"
Feb 18 02:04:58 crc kubenswrapper[4847]: I0218 02:04:58.317922 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a7684d97c5af27f77d7c6313ed98aeadc711d0e9c7d4c4745a1ac42a8d526b2"} err="failed to get container status \"8a7684d97c5af27f77d7c6313ed98aeadc711d0e9c7d4c4745a1ac42a8d526b2\": rpc error: code = NotFound desc = could not find container \"8a7684d97c5af27f77d7c6313ed98aeadc711d0e9c7d4c4745a1ac42a8d526b2\": container with ID starting with 8a7684d97c5af27f77d7c6313ed98aeadc711d0e9c7d4c4745a1ac42a8d526b2 not found: ID does not exist"
Feb 18 02:04:58 crc kubenswrapper[4847]: I0218 02:04:58.317952 4847 scope.go:117] "RemoveContainer" containerID="9233404d0433a8c2ce612f7ee33d3a77c55cef11ed65e085d103d57896e9d19d"
Feb 18 02:04:58 crc kubenswrapper[4847]: E0218 02:04:58.318174 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9233404d0433a8c2ce612f7ee33d3a77c55cef11ed65e085d103d57896e9d19d\": container with ID starting with 9233404d0433a8c2ce612f7ee33d3a77c55cef11ed65e085d103d57896e9d19d not found: ID does not exist" containerID="9233404d0433a8c2ce612f7ee33d3a77c55cef11ed65e085d103d57896e9d19d"
Feb 18 02:04:58 crc kubenswrapper[4847]: I0218 02:04:58.318201 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9233404d0433a8c2ce612f7ee33d3a77c55cef11ed65e085d103d57896e9d19d"} err="failed to get container status \"9233404d0433a8c2ce612f7ee33d3a77c55cef11ed65e085d103d57896e9d19d\": rpc error: code = NotFound desc = could not find container \"9233404d0433a8c2ce612f7ee33d3a77c55cef11ed65e085d103d57896e9d19d\": container with ID starting with 9233404d0433a8c2ce612f7ee33d3a77c55cef11ed65e085d103d57896e9d19d not found: ID does not exist"
Feb 18 02:04:58 crc kubenswrapper[4847]: I0218 02:04:58.318218 4847 scope.go:117] "RemoveContainer" containerID="65df47f1c7060ac082575013a70199994cfc8ff6fa26b3a869593eb61cca4fb9"
Feb 18 02:04:58 crc kubenswrapper[4847]: E0218 02:04:58.318439 4847 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65df47f1c7060ac082575013a70199994cfc8ff6fa26b3a869593eb61cca4fb9\": container with ID starting with 65df47f1c7060ac082575013a70199994cfc8ff6fa26b3a869593eb61cca4fb9 not found: ID does not exist" containerID="65df47f1c7060ac082575013a70199994cfc8ff6fa26b3a869593eb61cca4fb9"
Feb 18 02:04:58 crc kubenswrapper[4847]: I0218 02:04:58.318485 4847 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65df47f1c7060ac082575013a70199994cfc8ff6fa26b3a869593eb61cca4fb9"} err="failed to get container status \"65df47f1c7060ac082575013a70199994cfc8ff6fa26b3a869593eb61cca4fb9\": rpc error: code = NotFound desc = could not find container \"65df47f1c7060ac082575013a70199994cfc8ff6fa26b3a869593eb61cca4fb9\": container with ID starting with 65df47f1c7060ac082575013a70199994cfc8ff6fa26b3a869593eb61cca4fb9 not found: ID does not exist"
Feb 18 02:04:58 crc kubenswrapper[4847]: I0218 02:04:58.426588 4847 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-22dm5"]
Feb 18 02:04:58 crc kubenswrapper[4847]: I0218 02:04:58.437150 4847 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-22dm5"]
Feb 18 02:04:59 crc kubenswrapper[4847]: I0218 02:04:59.427915 4847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf862fa5-6af4-41d7-9b30-d6ed5111c4fe" path="/var/lib/kubelet/pods/bf862fa5-6af4-41d7-9b30-d6ed5111c4fe/volumes"
Feb 18 02:05:02 crc kubenswrapper[4847]: I0218 02:05:02.404970 4847 scope.go:117] "RemoveContainer" containerID="3d23c310c10821dd1ce8e5d6baa74225ba29df34616a6cdf95639f6bc0b3b7bc"
Feb 18 02:05:03 crc kubenswrapper[4847]: I0218 02:05:03.156965 4847 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xsj47" event={"ID":"ec351c0c-107b-4bfd-ae6b-1e6ae2c22bd5","Type":"ContainerStarted","Data":"b06251f6ec5bbdddd1f3474be60fbf16258e204f2cf205a81eec4c6b7dce7bd0"}
Feb 18 02:05:04 crc kubenswrapper[4847]: E0218 02:05:04.407190 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a"
Feb 18 02:05:05 crc kubenswrapper[4847]: E0218 02:05:05.412627 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6"
Feb 18 02:05:17 crc kubenswrapper[4847]: E0218 02:05:17.420060 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a"
Feb 18 02:05:18 crc kubenswrapper[4847]: E0218 02:05:18.409130 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6"
Feb 18 02:05:30 crc kubenswrapper[4847]: E0218 02:05:30.408896 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a"
Feb 18 02:05:30 crc kubenswrapper[4847]: E0218 02:05:30.408906 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6"
Feb 18 02:05:41 crc kubenswrapper[4847]: E0218 02:05:41.407163 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a"
Feb 18 02:05:45 crc kubenswrapper[4847]: E0218 02:05:45.407369 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6"
Feb 18 02:05:55 crc kubenswrapper[4847]: E0218 02:05:55.408077 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-k4t5r" podUID="452f74c1-fa5f-464b-9943-a4a1c2d5c48a"
Feb 18 02:06:00 crc kubenswrapper[4847]: E0218 02:06:00.408534 4847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="7de7b1e6-0511-4cd1-a4b2-d5b03e727ac6"